Dataset columns:
CHANNEL_NAME: string, 1 unique value
URL: string, length 43
TITLE: string, length 19 to 90
DESCRIPTION: string, length 475 to 4.65k
TRANSCRIPTION: string, length 0 to 20.1k
SEGMENTS: string, length 2 to 30.8k
Two Minute Papers
https://www.youtube.com/watch?v=DuMmcVOsNcs
These Neural Networks Empower Digital Artists
The paper "Differentiable Image Parameterizations" is available here: https://distill.pub/2018/differentiable-parameterizations/ Distill editorial article - see how you can contribute here: https://distill.pub/2018/editorial-update/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we have seen many times how good neural network-based solutions are at image classification. This means that the network looks at an image and successfully identifies its contents. However, neural network-based solutions are also capable of empowering art projects by generating new, interesting images. This beautifully written paper explores how a slight tweak to a problem definition can drastically change the output of such a neural network. It shows how many of these research works can be seen as manifestations of the same overarching idea. For instance, we can try to visualize what groups of neurons within these networks are looking for, and we get something like this. The reason for this is that important visual features, like the eyes, can appear in any part of the image, and different groups of neurons look for them in different places. With a small modification, we can put these individual visualizations within a shared space and create a much more consistent and readable output. In a different experiment, it is shown how a similar idea can be used with compositional pattern-producing networks, or CPPNs for short. These networks take spatial positions as input and produce colors as output, thereby creating interesting images of arbitrary resolution. Depending on the structure of this network, it can create beautiful images that are reminiscent of light paintings. And here you can see how the output of these networks changes during the training process. They can be used for image morphing as well. A similar idea can be used to go beyond classical 2D RGB images and create semi-transparent images instead. And there is much, much more in the paper. For instance, there is an interactive demo that shows how we can seamlessly put this texture on a 3D object. It is also possible to perform neural style transfer on a 3D model. This means that we have an image for the style and a target 3D model, and you can see the results over here. This paper is a gold mine of knowledge and contains a lot of insights on how neural networks can further empower artists working in the industry. If you read only one paper today, it should definitely be this one. And this is not just about reading: you can also play with these visualizations, and as the source code is available for all of these, you can also build something amazing on top of them. Let the experiments begin. So, this was a paper from the amazing Distill journal, and just so you know, they may be branching out to different areas of expertise, which is amazing news. However, they are looking for a few helping hands to accomplish that, so make sure to click the link to this editorial update in the video description to see how you can contribute. I would personally love to see more of these interactive articles. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute paper sweet katojonaifahir."}, {"start": 4.32, "end": 10.96, "text": " In this series, we have seen many times how good neural network-based solutions are at image classification."}, {"start": 10.96, "end": 16.12, "text": " This means that the network looks at an image and successfully identifies its contents."}, {"start": 16.12, "end": 24.0, "text": " However, neural network-based solutions are also capable of empowering art projects by generating new, interesting images."}, {"start": 24.0, "end": 33.0, "text": " This beautifully written paper explores how a slight tweak to a problem definition can drastically change the output of such a neural network."}, {"start": 33.0, "end": 39.8, "text": " It shows how many of these research works can be seen as the manifestation of the same overarching idea."}, {"start": 39.8, "end": 47.2, "text": " For instance, we can try to visualize what groups of neurons within these networks are looking for and we get something like this."}, {"start": 47.2, "end": 56.56, "text": " The reason for this is that important visual features like the eyes can appear at any part of the image and different groups of neurons look for it elsewhere."}, {"start": 56.56, "end": 67.80000000000001, "text": " With a small modification, we can put these individual visualizations within a shared space and create a much more consistent and readable output."}, {"start": 67.80000000000001, "end": 76.76, "text": " In a different experiment, it is shown how a similar idea can be used with compositional pattern-producing networks or CPPNs in short."}, {"start": 76.76, "end": 87.16000000000001, "text": " These networks are able to take spatial positions as an input and produce colors on the output thereby creating interesting images of arbitrary resolution."}, {"start": 87.16000000000001, "end": 93.76, "text": " Depending on the structure of this network, it can create beautiful images that are reminiscent of light paintings."}, {"start": 93.76, "end": 99.16000000000001, "text": " And here you can see how the output of these networks change during the training process."}, {"start": 99.16, "end": 109.36, "text": " They can also be used for image morphing as well."}, {"start": 109.36, "end": 118.56, "text": " A similar idea can be used to create images that are beyond the classical 2D RGB images and create semi-transparent images instead."}, {"start": 118.56, "end": 121.16, "text": " And there is much, much more in the paper."}, {"start": 121.16, "end": 128.35999999999999, "text": " For instance, there is an interactive demo that shows how we can seamlessly put this texture on a 3D object."}, {"start": 128.36, "end": 133.36, "text": " It is also possible to perform neural style transfer on a 3D model."}, {"start": 133.36, "end": 140.36, "text": " This means that we have an image for style and a target 3D model, and you can see the results over here."}, {"start": 140.36, "end": 149.36, "text": " This paper is a gold mine of knowledge and contains a lot of insights on how neural networks can further empower artists working in the industry."}, {"start": 149.36, "end": 153.36, "text": " If you read only one paper today, it should definitely be this one."}, {"start": 153.36, "end": 164.36, "text": " And this is not just about reading, you can also play with these visualizations, and as the source code is also available for all of these, you can also build something amazing on top of them."}, {"start": 164.36, 
"end": 166.36, "text": " Let the experiments begin."}, {"start": 166.36, "end": 175.36, "text": " So, this was a paper from the amazing distal journal, and just so you know, they may be branching out to different areas of expertise, which is amazing news."}, {"start": 175.36, "end": 185.36, "text": " However, they are looking for a few helping hands to accomplish that, so make sure to click the link to this editorial update in the video description to see how you can contribute."}, {"start": 185.36, "end": 189.36, "text": " I would personally love to see more of these interactive articles."}, {"start": 189.36, "end": 216.36, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dyzn3Fmtw-E
This Painter AI Fools Art Historians 39% of the Time
Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "A Style-Aware Content Loss for Real-time HD Style Transfer" is available here: https://compvis.github.io/adaptive-style-transfer/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1478831/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an AI-based technique where you take a photograph, put a painting next to it, and it applies the style of the painting to our photo. A key insight of this new work is that a style is complex, and it can only be approximated with one image. One image is just one instance of a style, not the style itself. Have a look here. If we take this content image and use Van Gogh's Road with Cypress and Star painting as the art style, we get this. However, had we used Starry Night instead, it would have resulted in this. This is not learning the style, this is learning a specific instance of a style. Here you see two previous algorithms that were instead trained on a collection of works from Van Gogh. However, you see that they are a little blurry and lack detail. This new technique is able to address this really well. Also, look at how convincingly it stylized the top silhouettes of the bell tower. It can also deal with HD videos at a reasonable speed of nine of these images per second. Very tasty, love it. And of course, style transfer is a rapidly growing field, so there are ample comparisons in the paper against other competing techniques. The results are very convincing. I feel that in most cases, it represents the art style really well and can decide where to leave the image content similar to the input and where to apply the style, so the overall look of the image remains similar. So we can look at these results and discuss who likes which one all day long. But there are also other, more objective ways of evaluating such an algorithm. What is really cool is that the technique was tested by human art history experts, and they not only found this method to be the most convincing of all the style transfer methods, but also thought that the AI-produced paintings came from the artist 39% of the time. So this means that the algorithm is able to learn the essence of an artistic style from a collection of images. This is a huge leap forward. Make sure to have a look at the paper, which also describes a new style-aware loss function and differences in the training process of this method as well. And if you enjoyed this episode and would like to see more, please help us exist through Patreon. There, you can support the series and pick up cool perks like early access to these videos, deciding the order of future episodes, and more. You know the drill: a dollar a month is almost nothing, but it keeps the papers coming. We also support cryptocurrencies; you'll find more information about this in the video description. Thanks for watching and for your generous support. I'll see you next time.
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5, "end": 9.84, "text": " Style Transfer is a mostly AIB technique where you take a photograph, put a painting next"}, {"start": 9.84, "end": 13.72, "text": " to it, and it applies the style of the painting to our photo."}, {"start": 13.72, "end": 19.8, "text": " A key insight of this new work is that a style is complex, and it can only be approximated"}, {"start": 19.8, "end": 21.080000000000002, "text": " with one image."}, {"start": 21.080000000000002, "end": 25.48, "text": " One image is just one instance of a style, not the style itself."}, {"start": 25.48, "end": 26.48, "text": " Have a look here."}, {"start": 26.48, "end": 31.96, "text": " If we take this content image and use Van Gogh's road with Cypress and star painting as"}, {"start": 31.96, "end": 34.16, "text": " the art style, we get this."}, {"start": 34.16, "end": 39.72, "text": " However, if we would have used Starry Night instead, it would have resulted in this."}, {"start": 39.72, "end": 45.0, "text": " This is not learning about the style, this is learning a specific instance of a style."}, {"start": 45.0, "end": 49.88, "text": " Here you see two previous algorithms that were instead trained on a collection of works"}, {"start": 49.88, "end": 51.08, "text": " from Van Gogh."}, {"start": 51.08, "end": 55.32, "text": " However, you see that they are a little blurry and lack detail."}, {"start": 55.32, "end": 58.56, "text": " This new technique is able to address this really well."}, {"start": 58.56, "end": 64.64, "text": " Also, look at how convincingly it stylized the top silhouettes of the bell tower."}, {"start": 64.64, "end": 71.16, "text": " It can also deal with HD videos at a reasonable speed of 9 of these images per second."}, {"start": 71.16, "end": 73.0, "text": " Very tasty, love it."}, {"start": 73.0, "end": 77.68, "text": " And of course, a style transfer is a rapidly growing field that are ample comparisons in"}, {"start": 77.68, "end": 80.52, "text": " the paper against other competing techniques."}, {"start": 80.52, "end": 82.44, "text": " The results are very convincing."}, {"start": 82.44, "end": 87.92, "text": " I feel that in most cases, it represents the art style really well and can decide where"}, {"start": 87.92, "end": 93.16, "text": " to leave the image content similar to the input and where to apply the style so the overall"}, {"start": 93.16, "end": 95.67999999999999, "text": " outlook of the image remains similar."}, {"start": 95.67999999999999, "end": 100.24, "text": " So we can look at these results and discuss who likes which one all day long."}, {"start": 100.24, "end": 105.03999999999999, "text": " But there are also other, more objective ways of evaluating such an algorithm."}, {"start": 105.03999999999999, "end": 110.24, "text": " What is really cool is that the technique was tested by human art history experts and"}, {"start": 110.24, "end": 114.8, "text": " they not only found this method to be the most convincing of all the other style transfer"}, {"start": 114.8, "end": 121.44, "text": " methods, but also thought that the AI produced paintings were from an artist 39% of the"}, {"start": 121.44, "end": 122.44, "text": " time."}, {"start": 122.44, "end": 127.75999999999999, "text": " So this means that the algorithm is able to learn the essence of an artistic style from"}, {"start": 127.75999999999999, "end": 129.44, "text": " a collection of images."}, {"start": 
129.44, "end": 131.44, "text": " This is a hugely forward."}, {"start": 131.44, "end": 136.32, "text": " Make sure to have a look at the paper that also describes a new style-aware loss function"}, {"start": 136.32, "end": 140.04, "text": " and differences in the training process of this method as well."}, {"start": 140.04, "end": 144.64, "text": " And if you enjoyed this episode and would like to see more, please help us exist through"}, {"start": 144.64, "end": 145.64, "text": " Patreon."}, {"start": 145.64, "end": 150.2, "text": " In this website, you can support the series and pick up cool perks like early access to"}, {"start": 150.2, "end": 154.88, "text": " these videos, deciding the order of future episodes and more."}, {"start": 154.88, "end": 159.72, "text": " You know the drill, the dollar a month is almost nothing, but it keeps the papers coming."}, {"start": 159.72, "end": 163.92, "text": " We also support cryptocurrencies, you'll find more information about this in the video"}, {"start": 163.92, "end": 164.92, "text": " description."}, {"start": 164.92, "end": 166.92, "text": " Thanks for watching and for your generous support."}, {"start": 166.92, "end": 170.76, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=0xlbzCXJpLM
Should an AI Learn Like Humans?
The paper "Investigating Human Priors for Playing Video Games" is available here: https://rach0012.github.io/humanRL_website/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-1428428/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper reveals a fundamental difference between how humans and machines learn. You are given a video game with no instructions, you start playing it, and the only information you get is a line of text when you successfully finish the game. That's it. So far so good, this is relatively easy to play because the visual cues are quite clear. The pink blob looks like an adversary, and what the spikes do is also self-explanatory. This is easy to understand, so we can finish the game in less than a minute. Easy. Now, let's play this. Whoa! What is happening? Even empty space looks as if it were a solid tile. I am not sure if I can finish this version of the game, at least not in a minute, for sure. So, what is happening here is that some of the artwork of the objects has been masked out. As a result, this version of the game is much harder for humans to play. So far, this is hardly surprising, and if that were all, this wouldn't have been a very scientific experiment. However, this is not the case. To proceed from this point, we will try to find out what makes humans learn so efficiently, not by changing everything at once, but by changing and measuring only one variable at a time. So how about this version of the game? This is still manageable, since the environment remains the same and only the objects we interact with have been masked. Through trial and error, we can find out the mechanics of the game. What about reversing the semantics? Spikes now become tasty ice cream, and the shiny gold conceals an enemy that eats us. Very apt, I have to say. Again, with this, the problem suddenly becomes more difficult for humans, as we need some trial and error to find out the rules. After putting together several other masking strategies, they measured the amount of time, the number of deaths, and the number of interactions required to finish the level. I will draw your attention mainly to the blue lines, which show how much degradation each variable causes to the performance of humans. The main insight is not only that these different visual cues throw humans off, but that we learn, variable by variable, by how much. An important insight here is that highlighting important objects and visual consistency are key. So, what about the machines? How are learning algorithms affected? These are the baseline results. Masked semantics? Barely an issue. Masked object identities? This sounds quite hard, right? Barely an issue. Masked platforms and ladders? Barely an issue. This is a remarkable property of learning algorithms, as they don't only think in terms of visual cues, but in terms of mathematics and probabilities. Removing similarity information throws the machines off a bit, which is understandable, because the same objects may appear as if they were completely different. There is more analysis on this in the paper, so make sure to have a look. So, what are the conclusions here? Humans are remarkably good at reusing knowledge and at reading and understanding visual cues. However, if the visual cues become more cryptic, their performance drastically decreases. When machines start playing the game, at first they have no idea which character they control, how gravity works, how to defeat enemies, or that keys are required to open doors. However, they learn these tricky, modified games much more easily and quickly, because these mind-bending changes, as you remember, are barely an issue for them.
Note that you can play the original and the obfuscated versions on the authors' website as well. Also note that we have really only scratched the surface here; the paper contains a lot more insights. So, it is the perfect time to nourish your mind with a paper; make sure to click the link in the video description and give it a read. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karojona Ifehir."}, {"start": 4.6000000000000005, "end": 10.32, "text": " This paper reveals us a fundamental difference between how humans and machines learn."}, {"start": 10.32, "end": 15.68, "text": " You are given a video game with no instructions, you start playing it, and the only information"}, {"start": 15.68, "end": 19.68, "text": " you get is a line of text when you successfully finish the game."}, {"start": 19.68, "end": 20.92, "text": " That's it."}, {"start": 20.92, "end": 26.560000000000002, "text": " So far so good, this is relatively easy to play because the visual cues are quite clear."}, {"start": 26.56, "end": 32.64, "text": " The ping blob looks like an adversary, and what the spikes do is also self-experimentary."}, {"start": 32.64, "end": 36.68, "text": " This is easy to understand so we can finish the game in less than a minute."}, {"start": 36.68, "end": 37.68, "text": " Easy."}, {"start": 37.68, "end": 39.8, "text": " Now, let's play this."}, {"start": 39.8, "end": 41.32, "text": " Woah!"}, {"start": 41.32, "end": 42.8, "text": " What is happening?"}, {"start": 42.8, "end": 46.519999999999996, "text": " Even empty space looks like as if it were a solid tile."}, {"start": 46.519999999999996, "end": 51.92, "text": " I am not sure if I can finish this version of the game, at least not in a minute for sure."}, {"start": 51.92, "end": 57.72, "text": " So, what is happening here is that some of the artwork of the objects has been masked out."}, {"start": 57.72, "end": 62.32, "text": " As a result, this version of the game is much harder to play for humans."}, {"start": 62.32, "end": 66.68, "text": " So far, this is hardly surprising, and if that would be it, this wouldn't have been"}, {"start": 66.68, "end": 68.68, "text": " a very scientific experiment."}, {"start": 68.68, "end": 70.8, "text": " However, this is not the case."}, {"start": 70.8, "end": 76.32000000000001, "text": " So to proceed from this point, we will try to find what makes humans learn so efficiently,"}, {"start": 76.32, "end": 82.32, "text": " that not by changing everything at once, but by trying to change and measure only one variable"}, {"start": 82.32, "end": 83.39999999999999, "text": " at a time."}, {"start": 83.39999999999999, "end": 85.91999999999999, "text": " So how about this version of the game?"}, {"start": 85.91999999999999, "end": 90.91999999999999, "text": " This is still manageable since the environment remains the same, only the objects we interact"}, {"start": 90.91999999999999, "end": 92.88, "text": " with have been masked."}, {"start": 92.88, "end": 96.8, "text": " Through trial and error, we can find out the mechanics of the game."}, {"start": 96.8, "end": 99.39999999999999, "text": " What about reversing the semantics?"}, {"start": 99.39999999999999, "end": 105.88, "text": " Spikes now become tasty ice cream, and the shiny gold conceals an enemy that eats us."}, {"start": 105.88, "end": 107.52, "text": " Very apt, I have to say."}, {"start": 107.52, "end": 112.67999999999999, "text": " Again, with this, the problem suddenly became more difficult for humans as we need some trial"}, {"start": 112.67999999999999, "end": 114.92, "text": " and error to find out the rules."}, {"start": 114.92, "end": 120.19999999999999, "text": " After putting together several other masking strategies, they measured the amount of time,"}, {"start": 120.19999999999999, "end": 124.47999999999999, 
"text": " the number of deaths, and interactions that were required to finish the level."}, {"start": 124.47999999999999, "end": 129.72, "text": " I will draw your attention mainly to the blue lines which show which variable cause"}, {"start": 129.72, "end": 133.0, "text": " how much degradation to the performance of humans."}, {"start": 133.0, "end": 137.88, "text": " The main piece of insight is not only that these different visual cues throw off humans,"}, {"start": 137.88, "end": 142.44, "text": " but it tells us variable by variable and also by how much."}, {"start": 142.44, "end": 147.92, "text": " An important insight here is that highlighting important objects and visual consistency are"}, {"start": 147.92, "end": 148.92, "text": " key."}, {"start": 148.92, "end": 151.24, "text": " So, what about the machines?"}, {"start": 151.24, "end": 153.88, "text": " How are learning algorithms affected?"}, {"start": 153.88, "end": 155.72, "text": " These are the baseline results."}, {"start": 155.72, "end": 157.44, "text": " Adding mass semantics?"}, {"start": 157.44, "end": 158.8, "text": " Barely an issue."}, {"start": 158.8, "end": 160.52, "text": " Mass object identities?"}, {"start": 160.52, "end": 162.68, "text": " This sounds quite hard, right?"}, {"start": 162.68, "end": 164.24, "text": " Barely an issue."}, {"start": 164.24, "end": 167.48000000000002, "text": " Mass platforms and letters, barely an issue."}, {"start": 167.48000000000002, "end": 172.08, "text": " This is a remarkable property of learning algorithms as they don't only think in terms"}, {"start": 172.08, "end": 176.64000000000001, "text": " of visual cues, but in terms of mathematics and probabilities."}, {"start": 176.64000000000001, "end": 181.44, "text": " Removing similarity information throws the machines off a bit, which is understandable"}, {"start": 181.44, "end": 185.68, "text": " because the same objects may appear as if they were completely different."}, {"start": 185.68, "end": 189.52, "text": " There is more analysis on this and the paper, so make sure to have a look."}, {"start": 189.52, "end": 192.24, "text": " So, what are the conclusions here?"}, {"start": 192.24, "end": 197.92000000000002, "text": " Ideas are remarkably good at reusing knowledge and reading and understanding visual cues."}, {"start": 197.92000000000002, "end": 203.4, "text": " However, if the visual cues become more cryptic, their performance drastically decreases."}, {"start": 203.4, "end": 208.92000000000002, "text": " When machines start playing the game, at first they have no idea which character they control,"}, {"start": 208.92000000000002, "end": 214.72, "text": " how gravity works or how to defeat enemies, or that keys are required to open doors."}, {"start": 214.72, "end": 220.24, "text": " However, they learn these tricky problems and games much easier and quicker because"}, {"start": 220.24, "end": 224.84, "text": " these mind-bending changes, as you remember, are barely an issue."}, {"start": 224.84, "end": 229.0, "text": " Note that you can play the original and the obfuscated versions on the author's website"}, {"start": 229.0, "end": 230.0, "text": " as well."}, {"start": 230.0, "end": 234.60000000000002, "text": " Also note that we really have only scratched the surface here, the paper contains a lot"}, {"start": 234.60000000000002, "end": 235.60000000000002, "text": " more insights."}, {"start": 235.60000000000002, "end": 240.24, "text": " So, it is the perfect time to nourish your mind with a paper, make sure to click 
it in"}, {"start": 240.24, "end": 242.36, "text": " the video description and give it a read."}, {"start": 242.36, "end": 252.36, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wR2OlsF1CEY
DeepMind's New AI Diagnoses Eye Conditions
The paper "Clinically applicable deep learning for diagnosis and referral in retinal disease" is available here: https://deepmind.com/blog/moorfields-major-milestone/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-691269/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this video series, we often see how these amazing new machine learning algorithms can make our lives easier, and fortunately, some of them are also useful for serious medical applications. Specifically, medical imaging. Medical imaging is commonly used in most healthcare systems, where an image of a chosen set of organs and tissues is made for a doctor to look at and decide whether medical intervention is required. The main issue is that the number of diagnostic images out there in the wild increases at a staggering pace, which makes it more and more infeasible for doctors to look at all of them. But wait a minute: as more and more images are created, this also means that we have more training data for machine learning algorithms, so at the same time as doctors get more and more swamped, the AI should get better and better over time. These methods can process orders of magnitude more of these images than humans, and after that, the final decision is put back into the hands of the doctor, who can now focus more on the edge cases and prioritize which patients should be seen immediately. This work from scientists at DeepMind was trained on about 14,000 optical coherence tomography scans. This is the OCT label that you see on the left. These images are cross sections of the human retina. We first start out with this OCT scan, then the manual segmentation step follows, where a doctor marks up this image to show where the most relevant parts, like the retinal fluids or the elevations of retinal pigment, are. Before we proceed, let's stop here for a moment and look at some images of how the network can learn from the doctors and reproduce the segmentations by itself. Look at that, it's almost pixel perfect. This looks like science fiction. Now that we have the segmentation map, it is time to perform classification. This means that we look at this map and assign a probability to each possible condition that may be present. Finally, based on these, a verdict is made on whether the patient needs to be seen urgently, needs just a routine check, or perhaps needs no check at all. The algorithm also learns this classification step and creates these verdicts itself. And of course, the question naturally arises: how accurate is this? Well, let's look at the confusion matrices. The confusion matrix shows us how many of the urgent cases were correctly classified as urgent, how often they were misclassified as something else, and what that something else was. The same analysis is performed for all other classes. Here's how the retina specialist doctors did, and here is how the AI did. I'll leave it here for a few seconds for you to inspect it. Really good. Here's also a different way of aggregating this data. The algorithm did significantly better than all of the optometrists and matched the performance of the number one retina specialist. I wouldn't believe any of these results if I didn't see these reports with my own eyes in the paper. An additional advantage of this technique is that it works on different kinds of imaging devices, and it is among the first methods that works with 3D data. Another plus that I really liked is that this was developed in close collaboration with a top-tier eye hospital in London to make sure that the results are as practical as possible. The paper contains a ton more information, so make sure to have a look. This was a herculean effort from DeepMind, and the results are truly staggering.
What a time to be alive. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karojol Naifahir."}, {"start": 4.36, "end": 9.32, "text": " In this video series, we often see how these amazing new machine learning algorithms can"}, {"start": 9.32, "end": 15.36, "text": " make our lives easier, and fortunately, some of them are also useful for serious medical"}, {"start": 15.36, "end": 16.76, "text": " applications."}, {"start": 16.76, "end": 19.56, "text": " Specifically, medical imaging."}, {"start": 19.56, "end": 25.2, "text": " Medical imaging is commonly used in most healthcare systems, where an image of a chosen set of organs"}, {"start": 25.2, "end": 32.16, "text": " and tissues is made for a doctor to look at and decide whether medical intervention is required."}, {"start": 32.16, "end": 37.44, "text": " The main issue is that the amount of diagnostic images out there in the wild increases at"}, {"start": 37.44, "end": 42.6, "text": " a staggering pace, and it makes it more and more infeasible for doctors to look at."}, {"start": 42.6, "end": 47.519999999999996, "text": " But wait a minute, as more and more images are created, this also means that we have more"}, {"start": 47.519999999999996, "end": 53.32, "text": " training data for machine learning algorithms, so at the same time as doctors get more and"}, {"start": 53.32, "end": 58.28, "text": " more swamped, the AI should get better and better over time."}, {"start": 58.28, "end": 63.2, "text": " These methods can process orders of magnitude more of these images than humans, and after"}, {"start": 63.2, "end": 68.48, "text": " that, the final decision is put back into the hands of the doctor who can now focus more"}, {"start": 68.48, "end": 73.68, "text": " on the edge cases and prioritize which patients should be seen immediately."}, {"start": 73.68, "end": 80.03999999999999, "text": " This work from scientists at DeepMind was trained on about 14,000 optical coherence tomography"}, {"start": 80.03999999999999, "end": 81.03999999999999, "text": " scans."}, {"start": 81.04, "end": 84.08000000000001, "text": " This is the OCT label that you see on the left."}, {"start": 84.08000000000001, "end": 87.16000000000001, "text": " These images are cross sections of the human retina."}, {"start": 87.16000000000001, "end": 93.16000000000001, "text": " We first start out with this OCT scan, then the manual segmentation step follows where"}, {"start": 93.16000000000001, "end": 99.68, "text": " a doctor marks up this image to show where the most relevant parts, like the retinal fluids"}, {"start": 99.68, "end": 102.80000000000001, "text": " or the elevations of retinal pigments are."}, {"start": 102.80000000000001, "end": 107.60000000000001, "text": " Before we proceed, let's stop here for a moment and look at some images of how the network"}, {"start": 107.6, "end": 115.52, "text": " can learn from the doctors and reproduce the segmentations by itself."}, {"start": 115.52, "end": 118.64, "text": " Look at that, it's almost pixel perfect."}, {"start": 118.64, "end": 121.0, "text": " This looks like science fiction."}, {"start": 121.0, "end": 125.52, "text": " Now that we have the segmentation map, it is time to perform classification."}, {"start": 125.52, "end": 130.72, "text": " This means that we look at this map and assign a probability to each possible condition"}, {"start": 130.72, "end": 132.2, "text": " that may be present."}, {"start": 132.2, "end": 137.56, "text": " Finally, based on these, a final verdict is made whether 
the patient needs to be urgently"}, {"start": 137.56, "end": 142.72, "text": " seen or just a routine check or perhaps no check is required."}, {"start": 142.72, "end": 148.24, "text": " The algorithm also learns this classification step and creates these verdicts itself."}, {"start": 148.24, "end": 151.32, "text": " And of course, the question naturally arises."}, {"start": 151.32, "end": 152.96, "text": " How accurate is this?"}, {"start": 152.96, "end": 155.72, "text": " Well, let's look at the confusion matrices."}, {"start": 155.72, "end": 160.76, "text": " The confusion matrix shows us how many of the urgent cases were correctly classified"}, {"start": 160.76, "end": 166.6, "text": " as urgent and how often it was misclassified as something else and what that something"}, {"start": 166.6, "end": 167.79999999999998, "text": " else was."}, {"start": 167.79999999999998, "end": 171.72, "text": " The same analysis is performed to all other classes."}, {"start": 171.72, "end": 176.72, "text": " Here's how the retinospatialist doctors did and here is how the AI did."}, {"start": 176.72, "end": 180.2, "text": " I'll leave it here for a few seconds for you to inspect it."}, {"start": 180.2, "end": 184.44, "text": " Really good."}, {"start": 184.44, "end": 187.95999999999998, "text": " Here's also a different way of aggregating this data."}, {"start": 187.95999999999998, "end": 193.56, "text": " The algorithm did significantly better than all of the optometrist and matched the performance"}, {"start": 193.56, "end": 196.44, "text": " of the number one retinospatialist."}, {"start": 196.44, "end": 201.28, "text": " I wouldn't believe any of these results if I didn't see these reports with my own eyes"}, {"start": 201.28, "end": 202.52, "text": " in the paper."}, {"start": 202.52, "end": 207.0, "text": " An additional advantage of this technique is that it works on different kinds of imaging"}, {"start": 207.0, "end": 212.4, "text": " devices and it is among the first methods that works with 3D data."}, {"start": 212.4, "end": 216.8, "text": " Another plus that I really liked is that this was developed as a close collaboration with"}, {"start": 216.8, "end": 221.96, "text": " the top tier eye hospital in London to make sure that the results are as practical as"}, {"start": 221.96, "end": 223.12, "text": " possible."}, {"start": 223.12, "end": 227.44, "text": " The paper contains a ton of more information so make sure to have a look."}, {"start": 227.44, "end": 233.24, "text": " This was a herculean effort from the side of deep mind and the results are truly staggering."}, {"start": 233.24, "end": 234.72, "text": " What a time to be alive."}, {"start": 234.72, "end": 262.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=GRQuRcpf5Gc
AI-Based Video-to-Video Synthesis
The paper "Video-to-Video Synthesis" and its source code is available here: https://tcwang0509.github.io/vid2vid/ https://github.com/NVIDIA/vid2vid Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Do you remember the amazing pix2pix algorithm from last year? It was able to perform image translation, which means that it could take a daytime image and translate it into a nighttime image, create maps from satellite images, or create photorealistic shoes from crude drawings. I remember that I almost fell off the chair when I first saw the results. But this new algorithm takes it up a notch and transforms these edge maps into human faces; not only that, but it also animates them in time. As you see here, it also takes into consideration the fact that the same edges may result in many different faces, and therefore it is also willing to give us more of these options. If I fell off the chair for the still image version, I don't really know what the appropriate reaction would be to this. It can also take a crude map of labels where each color corresponds to one object class, such as roads, cars or buildings, and it follows how our labels evolve in time and creates an animation out of it. We can also change the meaning of our labels easily; for instance, in the lower left, you see how the buildings are now suddenly transformed into trees. Or we can also change the trees to become buildings. Do you remember motion transfer from a couple of videos ago? It can do a similar variant of that too, and it even synthesizes the shadows around the character in a reasonably correct manner. As you see, the temporal coherence of this technique is second to none, which means that it remembers what it did with past images and doesn't do anything drastically different for the next frame, and therefore generates smoother videos. This is very apparent, especially when juxtaposed with the previous pix2pix method. So, there are three key differences from the previous technique to achieve this. One, the original architecture uses a generator network to create images, where there is also a separate discriminator network that judges its work and teaches it to do better. Instead, this work uses two discriminator neural networks: one checks whether the images look good one by one, and one more discriminator oversees whether the sequence of these images would pass as a video. This discriminator cracks down on the generator network if it creates sequences that are not temporally coherent, and this is why we have minimal flickering in the output videos. Fantastic idea! Two, to ease the training process, training is also done progressively, which means that the network is first faced with an easier version of the problem that progressively gets harder over time. If you have a look at the paper, you will see that the training is progressive both in terms of space and time. I love this idea too! Three, it also uses a flow map that describes the changes that took place since the previous frame. Note that the previous pix2pix algorithm was published in 2017, a little more than a year ago. I think that is a good indicator of the pace of progress in machine learning research. Up to 2K resolution, 30 seconds of video, and the source code is also available. Congratulations, folks! This paper is something else! Thanks for watching and for your generous support, and I'll see you next time!
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.36, "end": 8.66, "text": " Do you remember the amazing pics to Pixagorytham from last year?"}, {"start": 8.66, "end": 14.5, "text": " It was able to perform image translation, which means that it could take a daytime image"}, {"start": 14.5, "end": 20.14, "text": " and translate it into a nighttime image, create maps from satellite images,"}, {"start": 20.14, "end": 24.34, "text": " or create photorealistic shoes from crude drawings."}, {"start": 24.34, "end": 28.740000000000002, "text": " I remember that I almost fell off the chair when I first seen the results."}, {"start": 28.74, "end": 35.56, "text": " But this new algorithm takes it up a notch and transforms these edge maps into human faces,"}, {"start": 35.56, "end": 39.379999999999995, "text": " not only that, but it also animates them in time."}, {"start": 39.379999999999995, "end": 46.480000000000004, "text": " As you see here, it also takes into consideration the fact that the same edges may result in many different faces,"}, {"start": 46.480000000000004, "end": 50.28, "text": " and therefore it is also willing to give us more of these options."}, {"start": 50.28, "end": 57.12, "text": " If I fell out of the chair for the still image version, I don't really know what the appropriate reaction would be to this."}, {"start": 57.12, "end": 63.72, "text": " It can also take a crude map of labels where each color corresponds to one object class,"}, {"start": 63.72, "end": 71.75999999999999, "text": " such as roads, cars or buildings, and it follows how our labels evolve in time and creates an animation out of it."}, {"start": 71.75999999999999, "end": 81.56, "text": " We can also change the meaning of our labels easily, for instance in the lower left, you see how the buildings are now suddenly transformed to trees."}, {"start": 81.56, "end": 89.24000000000001, "text": " Or we can also change the trees to become buildings."}, {"start": 89.24000000000001, "end": 92.76, "text": " Do you remember motion transfer from a couple of videos ago?"}, {"start": 92.76, "end": 101.04, "text": " It can do a similar variant of that too, and even synthesizes the shadows around the character in a reasonably correct manner."}, {"start": 101.04, "end": 105.24000000000001, "text": " As you see, the temporal coherence of this technique is second to none,"}, {"start": 105.24, "end": 112.24, "text": " which means that it remembers what it did with past images and doesn't do anything drastically different for the next frame,"}, {"start": 112.24, "end": 115.03999999999999, "text": " and therefore generates smoother videos."}, {"start": 115.03999999999999, "end": 120.47999999999999, "text": " This is very apparent, especially when juxtaposed with the previous pixel-picks method."}, {"start": 120.47999999999999, "end": 125.56, "text": " So, there are three key differences from the previous technique to achieve this."}, {"start": 125.56, "end": 130.28, "text": " One, the original architecture uses a generator network to create images,"}, {"start": 130.28, "end": 137.08, "text": " where there is also a separate discriminator network that judges its work and teaches it to do better."}, {"start": 137.08, "end": 141.04, "text": " Instead, this work uses two discriminator neural networks,"}, {"start": 141.04, "end": 144.76, "text": " one checks whether the images look good one by one,"}, {"start": 144.76, "end": 151.52, "text": " and one more 
discriminator for overlooking whether the sequence of these images would pass as a video."}, {"start": 151.52, "end": 158.56, "text": " This discriminator cracks down on the generator network if it creates sequences that are not temporalic adherent,"}, {"start": 158.56, "end": 162.28, "text": " and this is why we have minimal flickering in the output videos."}, {"start": 162.28, "end": 164.12, "text": " Fantastic idea!"}, {"start": 164.12, "end": 168.48, "text": " Two, to ease the training process, it also does it progressively,"}, {"start": 168.48, "end": 176.04, "text": " which means that the network is first faced with an easier version of the problem that progressively gets harder over time."}, {"start": 176.04, "end": 182.92000000000002, "text": " If you have a look at the paper, you will see that the training is both progressive in terms of space and time."}, {"start": 182.92000000000002, "end": 184.92000000000002, "text": " I love this idea too!"}, {"start": 184.92, "end": 191.44, "text": " Three, it also uses a flow map that describes the changes that took place since the previous frame."}, {"start": 191.44, "end": 198.39999999999998, "text": " Note that this previous picks to pick SagaRytham was published in 2017 a little more than a year ago."}, {"start": 198.39999999999998, "end": 203.32, "text": " I think that is a good taste of the pace of progress in machine learning research."}, {"start": 203.32, "end": 209.11999999999998, "text": " Up to 2K resolution, 30 seconds of video, and the source code is also available."}, {"start": 209.11999999999998, "end": 212.56, "text": " Congratulations, Fox! This paper is something else!"}, {"start": 212.56, "end": 216.92000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
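The transcript in the entry above describes the key architectural idea: alongside the generator, one discriminator judges individual frames while a second judges short frame sequences, penalizing outputs that are not temporally coherent. Here is a minimal sketch of that two-discriminator setup in PyTorch; the layer shapes, the 3D-convolution video discriminator, and the unweighted loss sum are simplifying assumptions, not vid2vid's actual architecture or training code.

```python
# Sketch of the two-discriminator idea: one network scores frames, another scores clips.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

class FrameDiscriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, frame):                     # (B, C, H, W) -> realism score map
        return self.net(frame)

class VideoDiscriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # 3D convolutions look across time as well as space.
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.Conv3d(32, 1, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)))
    def forward(self, clip):                      # (B, C, T, H, W) -> realism score map
        return self.net(clip)

def generator_adversarial_loss(fake_frames, d_img, d_vid):
    # fake_frames: (B, C, T, H, W) generated clip; the generator wants both
    # discriminators to score it as real (label 1).
    per_frame = torch.stack([d_img(fake_frames[:, :, t])
                             for t in range(fake_frames.shape[2])]).mean(0)
    per_clip = d_vid(fake_frames)
    return (bce(per_frame, torch.ones_like(per_frame)) +
            bce(per_clip, torch.ones_like(per_clip)))

# Usage sketch on a random clip: batch of 2 clips, 8 frames each, 64x64 resolution.
d_img, d_vid = FrameDiscriminator(), VideoDiscriminator()
fake = torch.randn(2, 3, 8, 64, 64)
print(generator_adversarial_loss(fake, d_img, d_vid).item())
```

The flow-map conditioning and the spatially and temporally progressive training mentioned in the transcript are omitted here for brevity.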
Two Minute Papers
https://www.youtube.com/watch?v=HvH0b9K_Iro
This AI Performs Super Resolution in Less Than a Second
The paper "A Fully Progressive Approach to Single-Image Super-Resolution" is available here: http://igl.ethz.ch/projects/prosr/ A-Man's Caustic scene: http://www.luxrender.net/forum/gallery2.php?g2_itemId=27260 Corresponding paper with Vlad Miller's spheres scene: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-1822544/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When looking for illustrations for a presentation, most of the time I quickly find an appropriate photo on the internet; however, many of these photos are really low resolution. This often creates a weird situation where I have to think, okay, do I use the splotchier, lower-resolution image that gets the point across, or take a crisp, high-resolution image that is less educational? In case you're wondering, I encounter this problem for almost every single video I make for this channel. As you can surely tell, I am waiting for the day when super resolution becomes mainstream. Super resolution means that we have a low-resolution image that lacks details, and we feed it to a computer program which hallucinates all the details onto it, creating a crisp, high-resolution image. This way I could take my highly relevant blurry image, improve it, and use it in my videos. As adding details to images clearly requires a deep understanding of what is shown in these images, our seasoned Fellow Scholars immediately know that learning-based algorithms will be ideal for this task. While we are looking at some amazing results with this new technique, let's talk about the two key differences that this method introduces. One, it takes a fully progressive approach, which means that we don't immediately produce the highest resolution output we are looking for, but slowly leapfrog our way through intermediate steps, each of which is only slightly higher resolution than the input. This means that the final output is produced over several steps, where each problem is only a tiny bit harder than the previous one. This is often referred to as curriculum learning, and it not only increases the quality of the solution, but also makes training easier, as solving each intermediate step is only a little harder than the previous one. It is a bit like how students learn in school. First, the students are shown some easy introductory tasks to get a grasp of a problem, and slowly work their way towards mastering a field by solving problems that gradually increase in difficulty. Two, now we can start playing with the thought of using a generative adversarial network. We talk a lot about this architecture in this series. At this time, I will only note that training these is fraught with difficulties, so every bit of help we can get is more than welcome; the role of curriculum learning here is to ease this process. Note that this research field is well explored and has a remarkable number of papers, so I was expecting a lot of comparisons against competing techniques. And when looking at the paper and the supplementary materials, boy, did I get it. Make sure to have a look at the paper; it contains a very exhaustive validation section, which reveals that if we measure the error of the solution in terms of human perception, it is only slightly lower in quality than the best technique. However, this one is five times quicker, offering a really nice balance between quality and performance. So what about the actual numbers for the execution time? For instance, upsampling an image to twice its original size takes less than a second, and we can go up to even eight times the original resolution, which also only takes four and a half seconds. The quality and the execution times indicate that we are again one step closer to mainstream super resolution. What a time to be alive. The source code of this project is also available.
Thanks for watching and for your generous support, and I'll see you next time.
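To make the progressive, curriculum-style idea above more concrete, here is a minimal sketch in PyTorch: the network upscales in repeated 2x stages, and training unlocks one stage at a time so each sub-problem is only slightly harder than the last. This is not the authors' ProSR implementation; the layer sizes, stage schedule, and the plain L1 loss (instead of the full GAN setup) are illustrative assumptions.

```python
# Minimal sketch of progressive (curriculum-style) super-resolution.
# NOT the authors' ProSR code: sizes, schedule and loss are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleStage(nn.Module):
    """One 2x upsampling stage: a couple of convolutions plus pixel shuffle."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # doubles the spatial resolution
        )

    def forward(self, x):
        return self.body(x)

class ProgressiveSR(nn.Module):
    """Chains 2x stages; three stages give an 8x overall upscaling factor."""
    def __init__(self, channels=32, num_stages=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.stages = nn.ModuleList([UpsampleStage(channels) for _ in range(num_stages)])
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr, active_stages):
        x = self.head(lr)
        for stage in self.stages[:active_stages]:  # only the unlocked stages run
            x = stage(x)
        return self.to_rgb(x)

# Curriculum: solve the 2x problem first, then 4x, then 8x, so every new
# sub-problem is only slightly harder than the previous one.
model = ProgressiveSR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
lr_img = torch.rand(1, 3, 32, 32)      # stand-in low-resolution input
hr_img = torch.rand(1, 3, 256, 256)    # stand-in 8x ground truth
for active in (1, 2, 3):               # 2x -> 4x -> 8x
    if active < 3:                     # downscale the target to the current stage
        target = F.interpolate(hr_img, scale_factor=1 / 2 ** (3 - active),
                               mode='bilinear', align_corners=False)
    else:
        target = hr_img
    for _ in range(10):                # a few illustrative steps per stage
        loss = F.l1_loss(model(lr_img, active_stages=active), target)
        opt.zero_grad(); loss.backward(); opt.step()
```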
[{"start": 0.0, "end": 4.74, "text": " Dear Fellow Scholars, this is two-minute paper sweet caro e genre I fahir."}, {"start": 4.74, "end": 9.84, "text": " When looking for illustrations for a presentation, most of the time I quickly find an appropriate"}, {"start": 9.84, "end": 15.540000000000001, "text": " photo on the internet, however many of these photos are really low resolution."}, {"start": 15.540000000000001, "end": 20.78, "text": " This often creates a weird situation where I have to think, okay, do I use the splotch"}, {"start": 20.78, "end": 27.14, "text": " ear, lower resolution image that gets the point across, or take a high resolution crisp"}, {"start": 27.14, "end": 29.580000000000002, "text": " image that is less educational."}, {"start": 29.58, "end": 34.22, "text": " In case you're wondering, I encounter this problem for almost every single video I make"}, {"start": 34.22, "end": 35.42, "text": " for this channel."}, {"start": 35.42, "end": 40.76, "text": " As you can surely tell, I am waiting for the day when super resolution becomes mainstream."}, {"start": 40.76, "end": 45.7, "text": " Super resolution means that we have a low resolution image that lacks details and we feed"}, {"start": 45.7, "end": 51.739999999999995, "text": " it to a computer program which hallucinates all the details onto it, creating a crisp,"}, {"start": 51.739999999999995, "end": 53.379999999999995, "text": " high resolution image."}, {"start": 53.38, "end": 59.82, "text": " This way I could take my highly relevant blurry image, improve it, and use it in my videos."}, {"start": 59.82, "end": 64.82000000000001, "text": " As adding details to images clearly requires a deep understanding of what is shown in these"}, {"start": 64.82000000000001, "end": 70.30000000000001, "text": " images, our season fellow scholars immediately know that learning based algorithms will be"}, {"start": 70.30000000000001, "end": 72.26, "text": " ideal for this task."}, {"start": 72.26, "end": 76.34, "text": " While we are looking at some amazing results with this new technique, let's talk about"}, {"start": 76.34, "end": 80.06, "text": " the two key differences that this method introduces."}, {"start": 80.06, "end": 84.78, "text": " One, it takes a fully progressive approach which means that we don't immediately produce"}, {"start": 84.78, "end": 91.26, "text": " the highest resolution output we are looking for, but slowly leapfrog our way through intermediate"}, {"start": 91.26, "end": 96.5, "text": " steps, each of which is only slightly higher resolution than the input."}, {"start": 96.5, "end": 101.98, "text": " This means that the final output is produced over several steps where each problem is only"}, {"start": 101.98, "end": 104.62, "text": " a tiny bit harder than the previous one."}, {"start": 104.62, "end": 109.46000000000001, "text": " This is often referred to as curriculum learning and it not only increases the quality of the"}, {"start": 109.46, "end": 116.94, "text": " solution, but is also easier to train as solving each intermediate step is only a little harder"}, {"start": 116.94, "end": 118.33999999999999, "text": " than the previous one."}, {"start": 118.33999999999999, "end": 120.74, "text": " It is a bit like how students learn in school."}, {"start": 120.74, "end": 126.94, "text": " First, the students are shown some easy introductory tasks to get a grasp of a problem and slowly"}, {"start": 126.94, "end": 132.38, "text": " work their way towards mastering a field by solving problems that 
gradually increase in"}, {"start": 132.38, "end": 133.38, "text": " difficulty."}, {"start": 133.38, "end": 139.38, "text": " Two, now we can start playing with the thought of using a generative adversarial network."}, {"start": 139.38, "end": 142.66, "text": " We talk a lot about this architecture in this series."}, {"start": 142.66, "end": 147.9, "text": " At this time, I will only note that training these is fraught with difficulties, so every"}, {"start": 147.9, "end": 153.54, "text": " bit of help we can get is more than welcome, so the role of curriculum learning is to help"}, {"start": 153.54, "end": 155.46, "text": " easing this process."}, {"start": 155.46, "end": 160.66, "text": " Note that this research field is well explored and has a remarkable number of papers, so I"}, {"start": 160.66, "end": 164.57999999999998, "text": " was expecting a lot of comparisons against competing techniques."}, {"start": 164.58, "end": 169.86, "text": " And when looking at the paper and the supplementary materials, boy, did I get it."}, {"start": 169.86, "end": 174.58, "text": " Make sure to have a look at the paper, it contains a very exhaustive validation section,"}, {"start": 174.58, "end": 180.18, "text": " which reveals that if we measure the error of the solution in terms of human perception,"}, {"start": 180.18, "end": 183.58, "text": " it is only slightly lower quality than the best technique."}, {"start": 183.58, "end": 189.78, "text": " However, this one is five times quicker, offering a really nice balance between quality"}, {"start": 189.78, "end": 191.38000000000002, "text": " and performance."}, {"start": 191.38, "end": 195.26, "text": " So what about the actual numbers for the execution time?"}, {"start": 195.26, "end": 200.57999999999998, "text": " For instance, up sampling an image to increase its resolution to twice its original size"}, {"start": 200.57999999999998, "end": 207.06, "text": " takes less than a second, and we can go up to even eight times the original resolution,"}, {"start": 207.06, "end": 210.22, "text": " which also only takes four and a half seconds."}, {"start": 210.22, "end": 215.94, "text": " The quality and the execution times indicate that we are again one step closer to mainstream"}, {"start": 215.94, "end": 217.54, "text": " super resolution."}, {"start": 217.54, "end": 219.01999999999998, "text": " What a time to be alive."}, {"start": 219.02, "end": 221.82000000000002, "text": " The source code of this project is also available."}, {"start": 221.82, "end": 251.38, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cEBgi6QYDhQ
Everybody Dance Now! - AI-Based Motion Transfer
Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers The paper "Everybody Dance Now" is available here: https://arxiv.org/abs/1808.07371 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-2122473/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Do you remember style transfer? Style transfer is a mostly AI-based technique where we take a photograph, put a painting next to it, and it applies the style of the painting to our photo. That was amazing. Also, do you remember pose estimation? This is a problem where we have a photograph or a video of someone, and the output is a skeleton that shows the current posture of this person. So, how about something that combines the power of pose estimation with the expressiveness of style transfer? For instance, this way we could take a video of a professional dancer, then record a video of our own, let's say, moderately beautiful moves, and then transfer the dancer's performance onto our own body in the video. Let's call it motion transfer. Have a look at these results. How cool is that? As you see, these output videos are quite smooth, and this is not by accident. It doesn't just come out like that. With this technique, temporal coherence is taken into consideration. This means that the algorithm knows what it has done a moment ago and will not do something wildly different, making these dance motions smooth and believable. This method uses a generative adversarial network, where we have a neural network for pose estimation, or in other words, generating the skeleton from an image, and a generator network to create new footage when given a test subject and a new skeleton posture. The generator and a discriminator network battle each other and teach each other to distinguish and create more and more authentic footage over time. Some artifacts are still there, but note that this is among the first papers on this problem and it is already doing incredibly well. This is fresh and experimental. Just the way I like it. Two follow-up papers down the line, and we will barely be able to tell the difference from authentic footage. Make sure to have a look at the paper, where you will see how the pix2pix algorithm was also used for image generation, and there is a nice evaluation section as well. And now, let the age of AI-based dance videos begin. If you enjoy this episode, please consider supporting us on Patreon, where you can pick up really cool perks like early access to these videos, voting on the order of future episodes, and more. We are available at patreon.com slash TwoMinutePapers or just click the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
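A rough sketch of the motion-transfer pipeline described above, not the authors' implementation: the networks are toy placeholders, and `extract_pose` stands in for any off-the-shelf pose estimator that renders the detected skeleton into an image-like pose map. The generator is conditioned on the pose and on its own previous frame (the temporal coherence trick), while a discriminator judges pose–frame pairs.

```python
# Toy motion-transfer sketch (pose -> video), assumptions noted above.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a pose map plus the previously generated frame to the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pose_map, prev_frame):
        return self.net(torch.cat([pose_map, prev_frame], dim=1))

class Discriminator(nn.Module):
    """Judges whether a (pose, frame) pair looks like real target footage."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, pose_map, frame):
        return self.net(torch.cat([pose_map, frame], dim=1))

def extract_pose(frame):
    # Placeholder: a real system would run a pose estimator here.
    return torch.zeros_like(frame)

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

target_clip = torch.rand(8, 3, 64, 64)   # stand-in frames of the target person
prev = torch.zeros(1, 3, 64, 64)         # previously generated frame
for t in range(target_clip.shape[0]):
    real = target_clip[t:t + 1]
    pose = extract_pose(real)            # skeleton of the desired motion
    fake = G(pose, prev)                 # conditioned on the last output frame

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    logit_real, logit_fake = D(pose, real), D(pose, fake.detach())
    d_loss = bce(logit_real, torch.ones_like(logit_real)) + \
             bce(logit_fake, torch.zeros_like(logit_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    logit_gen = D(pose, fake)
    g_loss = bce(logit_gen, torch.ones_like(logit_gen))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    prev = fake.detach()                 # temporal conditioning for the next frame
```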
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.28, "end": 6.28, "text": " Do you remember Star Transfer?"}, {"start": 6.28, "end": 12.44, "text": " Star Transfer is a mostly AIB's technique where we take a photograph, put a painting next to it,"}, {"start": 12.44, "end": 15.76, "text": " and it applies the style of the painting to our photo."}, {"start": 15.76, "end": 17.52, "text": " That was amazing."}, {"start": 17.52, "end": 20.44, "text": " Also, do you remember pose estimation?"}, {"start": 20.44, "end": 26.28, "text": " This is a problem where we have a photograph or a video of someone and the output is a skeleton"}, {"start": 26.28, "end": 29.240000000000002, "text": " that shows the current posture of this person."}, {"start": 29.24, "end": 34.879999999999995, "text": " So, how about something that combines the power of pose estimation with the expressiveness"}, {"start": 34.879999999999995, "end": 36.519999999999996, "text": " of Star Transfer?"}, {"start": 36.519999999999996, "end": 41.48, "text": " For instance, this way we could take a video of a professional dancer, then record"}, {"start": 41.48, "end": 47.56, "text": " a video of our own, let's say moderately beautiful moves, and then transfer the dancer's"}, {"start": 47.56, "end": 51.08, "text": " performance onto our own body in the video."}, {"start": 51.08, "end": 53.08, "text": " Let's call it motion transfer."}, {"start": 53.08, "end": 55.32, "text": " Have a look at these results."}, {"start": 55.32, "end": 57.239999999999995, "text": " How cool is that?"}, {"start": 57.24, "end": 62.6, "text": " As you see, these output videos are quite smooth and this is not by accident."}, {"start": 62.6, "end": 64.56, "text": " It doesn't just come out like that."}, {"start": 64.56, "end": 68.36, "text": " With this technique, tempero coherence is taken into consideration."}, {"start": 68.36, "end": 73.04, "text": " This means that the algorithm knows what it has done a moment ago and will not do something"}, {"start": 73.04, "end": 77.88, "text": " wildly different, making these dance motions smooth and believable."}, {"start": 77.88, "end": 83.08, "text": " This method uses a generative adversarial network where we have a neural network for pose"}, {"start": 83.08, "end": 89.03999999999999, "text": " estimation or in other words, generating the skeleton from an image and a generator network"}, {"start": 89.03999999999999, "end": 93.96, "text": " to create new footage when given a test subject and a new skeleton posture."}, {"start": 93.96, "end": 99.2, "text": " These two neural networks battle each other and teach each other to distinguish and create"}, {"start": 99.2, "end": 102.24, "text": " more and more authentic footage over time."}, {"start": 102.24, "end": 107.12, "text": " Some artifacts are still there, but note that this is among the first papers on this problem"}, {"start": 107.12, "end": 110.16, "text": " and it is already doing incredibly well."}, {"start": 110.16, "end": 112.6, "text": " This is fresh and experimental."}, {"start": 112.6, "end": 113.83999999999999, "text": " Just the way I like it."}, {"start": 113.83999999999999, "end": 118.39999999999999, "text": " Two follow up papers down the line and will be worried that we can barely tell the difference"}, {"start": 118.39999999999999, "end": 120.11999999999999, "text": " from authentic footage."}, {"start": 120.11999999999999, "end": 124.24, "text": " Make sure to have a look 
at the paper where you will see how the pics to pics algorithm"}, {"start": 124.24, "end": 129.2, "text": " was also used for image generation and there is a nice evaluation section as well."}, {"start": 129.2, "end": 133.35999999999999, "text": " And now, let the age of AI-based dance videos begin."}, {"start": 133.35999999999999, "end": 137.64, "text": " If you enjoy this episode, please consider supporting us on Patreon where you can pick"}, {"start": 137.64, "end": 143.6, "text": " up really cool perks like early access to these videos, voting on the order of future episodes"}, {"start": 143.6, "end": 144.6, "text": " and more."}, {"start": 144.6, "end": 150.2, "text": " We are available at patreon.com slash 2 minute papers or just click the link in the video"}, {"start": 150.2, "end": 151.2, "text": " description."}, {"start": 151.2, "end": 177.95999999999998, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Mnu1DzFzRWs
This Neural Network Animates Quadrupeds
The paper "Mode-Adaptive Neural Networks for Quadruped Motion Control" is available here: http://homepages.inf.ed.ac.uk/tkomura/dog.pdf Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-142173/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high-quality, life-like animations, motion capture is often the go-to tool for that. Motion capture means that we put an actor, in our case a dog, in the studio, we ask it to perform sitting, trotting, pacing and jumping, record its motion and transfer it onto our virtual character. This generally works quite well, however, there are many difficulties with this process. We will skip over the fact that an artist or engineer has to clean and label the recorded data, which is quite labor intensive, but there is a bigger problem. We have all these individual motion types at our disposal, however, a virtual character will also need to be able to transition between these motions in a smooth and natural manner. Saving all possible transitions between these moves is not feasible, so in an earlier work we looked at a neural network-based technique to try to weave these motions together. At first sight, this looks great, however, have a look at these weird sliding motions that it produces. Do you see them? They look quite unnatural. This new method tries to address this problem but ends up offering much, much more than that. It requires only one hour of motion capture data, and we have only around 30 seconds of footage for jumping motions, which is basically next to nothing. And this technique can deal with unstructured data, meaning that it doesn't require manual labeling of the individual motion types, which saves a ton of work hours. Beyond that, as we control this character in the game, this technique also uses a prediction network to guess the next motion type and a gating network that helps blend together these different motion types. Both of these units are neural networks. On the right, you see the results with the new method compared to a standard neural network-based solution on the left. Make sure to pay special attention to the foot-sliding issues with the solution on the left and note that the new method doesn't produce any of those. Now, these motions look great, but they all take place on a flat surface. You see here that this new technique excels at much more challenging landscapes as well. This technique is a total powerhouse, and I can only imagine how many work hours this will save for artists working in the industry. It is also scientifically interesting and quite practical, my favorite combination. It is also well-evaluated, so make sure to have a look at the paper for more details. Thanks for watching and for your generous support, and I'll see you next time.
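The gating idea can be sketched as follows: a small gating network looks at the current character state and outputs blend coefficients that mix several expert networks, whose blended weights then predict the next motion frame. The feature sizes, expert count, and blending granularity here are assumptions for illustration, not the paper's exact mode-adaptive architecture.

```python
# Toy illustration of a gating network blending expert weights per frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPERTS, IN_DIM, HID, OUT_DIM = 4, 16, 32, 8

class GatingNetwork(nn.Module):
    """Looks at the current character/terrain state and scores each expert."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IN_DIM, 32), nn.ELU(), nn.Linear(32, NUM_EXPERTS))

    def forward(self, state):
        return F.softmax(self.net(state), dim=-1)  # blend coefficients

# Each expert is a tiny MLP stored as raw weight tensors, so the experts'
# weights themselves can be blended for every frame.
expert_w1 = torch.randn(NUM_EXPERTS, HID, IN_DIM) * 0.1
expert_b1 = torch.zeros(NUM_EXPERTS, HID)
expert_w2 = torch.randn(NUM_EXPERTS, OUT_DIM, HID) * 0.1
expert_b2 = torch.zeros(NUM_EXPERTS, OUT_DIM)
gate = GatingNetwork()

def motion_step(state):
    """Predict next-frame motion features from the current state."""
    alpha = gate(state)                                   # (batch, NUM_EXPERTS)
    w1 = torch.einsum('be,eij->bij', alpha, expert_w1)    # blended weights
    b1 = torch.einsum('be,ei->bi', alpha, expert_b1)
    w2 = torch.einsum('be,eij->bij', alpha, expert_w2)
    b2 = torch.einsum('be,ei->bi', alpha, expert_b2)
    h = F.elu(torch.einsum('bij,bj->bi', w1, state) + b1)
    return torch.einsum('bij,bj->bi', w2, h) + b2

state = torch.randn(2, IN_DIM)   # stand-in character and terrain features
print(motion_step(state).shape)  # -> torch.Size([2, 8])
```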
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.38, "end": 9.200000000000001, "text": " If we have an animation movie or a computer game with quadrupeds and we are yearning for"}, {"start": 9.200000000000001, "end": 15.88, "text": " really high quality, life-like animations, motion capture is often the go-to tool for that."}, {"start": 15.88, "end": 21.32, "text": " Motion capture means that we put an actor, in our case a dog in the studio, we ask it"}, {"start": 21.32, "end": 27.28, "text": " to perform sitting, trotting, pacing and jumping, record its motion and transfer it onto"}, {"start": 27.28, "end": 29.080000000000002, "text": " our virtual character."}, {"start": 29.08, "end": 34.239999999999995, "text": " This generally works quite well, however, there are many difficulties with this process."}, {"start": 34.239999999999995, "end": 39.519999999999996, "text": " We will skip over the fact that an artist or engineer has to clean and label the recorded"}, {"start": 39.519999999999996, "end": 44.2, "text": " data, which is quite labor intensive, but there is a bigger problem."}, {"start": 44.2, "end": 49.64, "text": " We have all these individual motion types at our disposal, however, a virtual character"}, {"start": 49.64, "end": 56.12, "text": " will also need to be able to transition between these motions in a smooth and natural manner."}, {"start": 56.12, "end": 61.239999999999995, "text": " Saving all possible transitions between these moves is not feasible, so in an earlier work"}, {"start": 61.239999999999995, "end": 66.8, "text": " we looked at a neural network-based technique to try to weave these motions together."}, {"start": 66.8, "end": 71.96, "text": " For the first sight, this looks great, however, have a look at these weird sliding motions"}, {"start": 71.96, "end": 73.64, "text": " that it produces."}, {"start": 73.64, "end": 74.75999999999999, "text": " Do you see them?"}, {"start": 74.75999999999999, "end": 76.6, "text": " They look quite unnatural."}, {"start": 76.6, "end": 81.6, "text": " This new method tries to address this problem but ends up offering much, much more than"}, {"start": 81.6, "end": 82.6, "text": " that."}, {"start": 82.6, "end": 87.8, "text": " This requires only one hour of motion capture data and we have only around 30 seconds of"}, {"start": 87.8, "end": 92.36, "text": " footage for jumping motions, which is basically next to nothing."}, {"start": 92.36, "end": 97.36, "text": " And this technique can deal with unstructured data, meaning that it doesn't require manual"}, {"start": 97.36, "end": 103.08, "text": " labeling of the individual motion types, which saves a ton of work hours."}, {"start": 103.08, "end": 107.75999999999999, "text": " Beyond that, as we control this character in the game, this technique also uses a prediction"}, {"start": 107.76, "end": 113.36, "text": " network to guess the next motion type and the gating network that helps blending together"}, {"start": 113.36, "end": 115.28, "text": " these different motion types."}, {"start": 115.28, "end": 117.64, "text": " Both of these units are neural networks."}, {"start": 117.64, "end": 122.08000000000001, "text": " On the right, you see the results with the new method compared to a standard neural network"}, {"start": 122.08000000000001, "end": 124.16, "text": "-based solution on the left."}, {"start": 124.16, "end": 128.12, "text": " Make sure to pay special attention to the 
foot-sliding issues with the solution on the"}, {"start": 128.12, "end": 132.32, "text": " left and note that the new method doesn't produce any of those."}, {"start": 132.32, "end": 137.68, "text": " Now, these motions look great, but they all take place on a flat surface."}, {"start": 137.68, "end": 142.92000000000002, "text": " You see here that this new technique excels at much more challenging landscapes as well."}, {"start": 142.92000000000002, "end": 147.52, "text": " This technique is a total powerhouse, and I can only imagine how many work hours this"}, {"start": 147.52, "end": 150.28, "text": " will save for artists working in the industry."}, {"start": 150.28, "end": 156.28, "text": " It is also scientifically interesting and quite practical, my favorite combination."}, {"start": 156.28, "end": 160.76000000000002, "text": " It is also well-evaluated, so make sure to have a look at the paper for more details."}, {"start": 160.76, "end": 167.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=P0fMwA3X5KI
NVIDIA's Image Restoration AI: Almost Perfect!
The paper "Noise2Noise: Learning Image Restoration without Clean Data" and its source code are available here: 1. https://arxiv.org/abs/1803.04189 2. https://github.com/NVlabs/noise2noise 3. https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/ Have a look at this too, some materials are now available for download! - https://developer.nvidia.com/rtx/ngx Unofficial implementation: https://github.com/yu4u/noise2noise Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-226279/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Image denoising is an area where we have a noisy image as an input and we wish to get a clear, noise-free image. Neural network-based solutions are amazing at this because we can feed them a large amount of training data with noisy inputs and clear outputs. And if we do that, during the training process, the neural network will be able to learn the concept of noise, and when presented with a new, previously unseen noisy image, it will be able to clear it up. However, with light transport simulations, creating a noisy image means following the path of millions and millions of light rays, which can take up to hours per training sample. And we need thousands or potentially hundreds of thousands of these. There are also other cases where creating the clean images for the training set is not just expensive, but flat out impossible. Low-light photography, astronomical imaging, or magnetic resonance imaging, MRI in short, are great examples of this. In these cases, we cannot use our neural networks simply because we cannot build such a training set, as we don't have access to the clear images. In this collaboration between NVIDIA, Aalto University and MIT, scientists came up with an insane idea. Let's try to train a neural network without clear images and use only noisy data. Normally, we would say that this is clearly impossible and end this research project. However, they show that under a suitable set of constraints, for instance, one reasonable assumption about the distribution of the noise, it opens up the possibility of restoring noisy signals without seeing clean ones. This is an insane idea that actually works and can help us restore images with significant outlier content. Not only that, but it is also shown that this technique can do close to or just as well as other previously known techniques that have access to clean images. You can look at these images, many of which have many different kinds of noise, like camera noise, noise from light transport simulations, MRI imaging, and images severely corrupted with a ton of random text. The usual limitations apply; in short, it of course cannot possibly recover content if we cut out a bigger region from our images. This severely hamstrung training process can be compared to a regular neural denoiser that has access to the clean images, and the differences are negligible most of the time. So how about that? We can teach a neural network to denoise without ever showing it the concept of denoising. Just the thought of this boggles my mind so much it keeps me up at night. This is such a remarkable concept. I hope there will soon be follow-up papers that extend this idea to other problems as well. If you enjoyed this episode and you feel that about 8 of these videos a month is worth a dollar, please consider supporting us on Patreon. We use these funds to make better videos for you, and a small portion is also used to fund research conferences. You can find us at patreon.com slash TwoMinutePapers, and there is also a link to it in the video description. You know the drill: one dollar is almost nothing, but it keeps the papers coming. Thanks for watching and for your generous support, and I'll see you next time.
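A minimal sketch of the training idea, assuming simple zero-mean Gaussian noise: both the input and the target are independently corrupted copies of the same image, and the network still learns to denoise. The tiny CNN and the synthetic data are placeholders; the official NVlabs repository linked above contains the real implementation.

```python
# Noise2Noise-style training sketch: noisy inputs, noisy targets, no clean targets.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
mse = nn.MSELoss()

def add_noise(img, sigma=0.1):
    # Zero-mean noise is the key assumption: the *expected* noisy target
    # equals the clean image, so the network converges towards it.
    return img + sigma * torch.randn_like(img)

# Clean images are only used here to synthesize the two noisy realizations;
# in the settings mentioned above (MRI, astronomy), only the noisy pairs
# would ever be observed.
clean_batch = torch.rand(4, 3, 64, 64)
for step in range(100):
    noisy_input = add_noise(clean_batch)     # first noisy realization
    noisy_target = add_noise(clean_batch)    # second, independent realization
    loss = mse(denoiser(noisy_input), noisy_target)  # no clean target anywhere
    opt.zero_grad(); loss.backward(); opt.step()
```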
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.28, "text": " Image denoising is an area where we have a noisy image as an input and we wish to get a"}, {"start": 9.28, "end": 11.48, "text": " clear noise-free image."}, {"start": 11.48, "end": 16.32, "text": " Neural network-based solutions are amazing at this because we can feed them a large amount"}, {"start": 16.32, "end": 20.0, "text": " of training data with noisy inputs and clear outputs."}, {"start": 20.0, "end": 24.400000000000002, "text": " And if we do that, during the training process, the neural network will be able to learn"}, {"start": 24.4, "end": 30.0, "text": " the concept of noise and when presented with a new, previously unseen noisy image, it"}, {"start": 30.0, "end": 31.96, "text": " will be able to clear it up."}, {"start": 31.96, "end": 37.04, "text": " However, with light transport simulations, creating a noisy image means following the"}, {"start": 37.04, "end": 42.4, "text": " path of millions and millions of light rays, which can take up to hours per training"}, {"start": 42.4, "end": 43.4, "text": " sample."}, {"start": 43.4, "end": 47.44, "text": " And we need thousands or potentially hundreds of thousands of these."}, {"start": 47.44, "end": 51.64, "text": " There are also other cases where creating the clean images for the training set is not"}, {"start": 51.64, "end": 55.08, "text": " just expensive, but flat out impossible."}, {"start": 55.08, "end": 61.0, "text": " Low light photography, astronomical imaging, or magnetic resonance imaging, MRI in short,"}, {"start": 61.0, "end": 62.92, "text": " are great examples of this."}, {"start": 62.92, "end": 67.76, "text": " In these cases, we cannot use our neural networks simply because we cannot build such a training"}, {"start": 67.76, "end": 71.04, "text": " set as we don't have access to the clear images."}, {"start": 71.04, "end": 76.76, "text": " In this collaboration between NVIDIA, Alto University and MIT, scientists came up with"}, {"start": 76.76, "end": 78.8, "text": " an insane idea."}, {"start": 78.8, "end": 84.03999999999999, "text": " Let's try to train a neural network without clear images and use only noisy data."}, {"start": 84.03999999999999, "end": 89.28, "text": " Normally, we would say that this is clearly impossible and end this research project."}, {"start": 89.28, "end": 94.28, "text": " However, they show that under a suitable set of constraints, for instance, one reasonable"}, {"start": 94.28, "end": 99.56, "text": " assumption about the distribution of the noise opens up the possibility of restoring noisy"}, {"start": 99.56, "end": 102.8, "text": " signals without seeing clean ones."}, {"start": 102.8, "end": 108.47999999999999, "text": " This is an insane idea that actually works and can help us restore images with significant"}, {"start": 108.48, "end": 110.32000000000001, "text": " outlier content."}, {"start": 110.32000000000001, "end": 115.80000000000001, "text": " Not only that, but it is also shown that this technique can do close to or just as well"}, {"start": 115.80000000000001, "end": 119.96000000000001, "text": " as other previously known techniques that have access to clean images."}, {"start": 119.96000000000001, "end": 124.60000000000001, "text": " You can look at these images, many of which have many different kinds of noise, like camera"}, {"start": 124.60000000000001, "end": 130.92000000000002, 
"text": " noise, noise from light transport simulations, MRI imaging, and images severely corrupted"}, {"start": 130.92000000000002, "end": 133.12, "text": " with a ton of random text."}, {"start": 133.12, "end": 138.4, "text": " The usual limitations apply, in short, it of course cannot possibly recover content"}, {"start": 138.4, "end": 141.24, "text": " if we cut out a bigger region from our images."}, {"start": 141.24, "end": 146.16, "text": " This severely hamstrung training process can be compared to a regular neural denoiser"}, {"start": 146.16, "end": 151.4, "text": " that has access to the clean images and the differences are negligible most of the time."}, {"start": 151.4, "end": 152.88, "text": " So how about that?"}, {"start": 152.88, "end": 158.52, "text": " We can teach a neural network to denoise without ever showing it the concept of denoising."}, {"start": 158.52, "end": 163.04000000000002, "text": " Just the thought of this boggles my mind so much it keeps me up at night."}, {"start": 163.04000000000002, "end": 165.32, "text": " This is such a remarkable concept."}, {"start": 165.32, "end": 169.68, "text": " I hope there will soon be follow-up papers that extend this idea to other problems as"}, {"start": 169.68, "end": 170.68, "text": " well."}, {"start": 170.68, "end": 175.04, "text": " If you enjoyed this episode and you feel that about 8 of these videos a month is worth"}, {"start": 175.04, "end": 178.51999999999998, "text": " a dollar, please consider supporting us on Patreon."}, {"start": 178.51999999999998, "end": 183.04, "text": " We use these funds to make better videos for you, and a small portion is also used to"}, {"start": 183.04, "end": 184.76, "text": " fund research conferences."}, {"start": 184.76, "end": 190.2, "text": " You can find us at patreon.com slash 2-minute papers and there is also a link to it in the"}, {"start": 190.2, "end": 191.48, "text": " video description."}, {"start": 191.48, "end": 195.95999999999998, "text": " Do not the drill, one dollar is almost nothing but it keeps the papers coming."}, {"start": 195.96, "end": 224.48000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LBezOcnNJ68
NVIDIA's AI Makes Amazing Slow-Mo Videos! 🚘
The paper "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation" is available here: https://people.cs.umass.edu/~hzjiang//projects/superslomo/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers Have a look at this too, some materials are now available for download! - https://developer.nvidia.com/rtx/ngx We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-848903/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. How about some slow motion videos? If we would like to create a slow motion video and we don't own an expensive slow-mo camera, we can try to shoot a normal video and simply slow it down. This sounds good on paper, however, the more we slow it down, the more space we have between our individual frames, and at some point our video will feel more like a slideshow. To get around this problem, in a previous video, we discussed two basic techniques to fill in these missing frames. One was a naive technique called frame blending that basically computes the average of two images. In most cases, this doesn't help all that much because it doesn't have an understanding of the motion that takes place in the video. The other one was optical flow. Now, this one is much smarter as it tries to estimate the kind of translation and rotational motions that take place in the video, and it typically does much better. However, the disadvantage of this is that it usually takes forever to compute and it often introduces visual artifacts. So now we are going to have a look at NVIDIA's results, and the main points of interest are always around the silhouettes of moving objects, especially around regions where the foreground and the background meet. Keep an eye out for these regions throughout this video. For instance, here is one example I found. Let me know in the comments section if you have found more. This technique builds on U-Net, a superfast convolutional neural network architecture that was originally used to segment biomedical images from limited training data. This neural network was trained on a bit over a thousand videos and computes multiple approximate optical flows and combines them in a way that tries to minimize artifacts. As you see in these side-by-side comparisons, it works amazingly well. Some artifacts still remain but are often hard to catch. And this architecture is blazing fast. Not real-time yet, but creating a few tens of these additional frames takes only a few seconds. The quality of the results is also evaluated and compared to other works in the paper, so make sure to have a look. As the current commercially available tools are super slow and take forever, I cannot wait to be able to use this technique to make some more amazing slow motion footage for you Fellow Scholars. Thanks for watching and for your generous support, and I'll see you next time.
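To contrast the two interpolation strategies mentioned above, here is a toy NumPy sketch: naive frame blending versus warping along a motion field. The optical flow is assumed to be given by some external estimator; the paper's actual contribution, predicting and refining these flows with a U-Net-style CNN, is not reproduced here.

```python
# Toy comparison: frame blending vs. flow-based warping for in-between frames.
import numpy as np

def frame_blend(frame_a, frame_b, t=0.5):
    """Naive in-between frame: a plain weighted average, no motion model."""
    return (1 - t) * frame_a + t * frame_b

def warp_with_flow(frame_a, flow_ab, t=0.5):
    """Sample frame_a part of the way along each pixel's motion vector."""
    h, w = frame_a.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    src_x = np.clip(np.round(xs - t * flow_ab[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - t * flow_ab[..., 1]).astype(int), 0, h - 1)
    return frame_a[src_y, src_x]

# Stand-in data: two 64x64 RGB frames related by a 4-pixel rightward motion.
frame_a = np.random.rand(64, 64, 3)
frame_b = np.roll(frame_a, 4, axis=1)
flow_ab = np.zeros((64, 64, 2))
flow_ab[..., 0] = 4.0                               # horizontal motion only

mid_blend = frame_blend(frame_a, frame_b)           # ghosting on moving content
mid_flow = warp_with_flow(frame_a, flow_ab, t=0.5)  # motion-aware in-between frame
print(mid_blend.shape, mid_flow.shape)              # both (64, 64, 3)
```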
[{"start": 0.0, "end": 4.68, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Jona Ifehir."}, {"start": 4.68, "end": 7.0200000000000005, "text": " How about some slow motion videos?"}, {"start": 7.0200000000000005, "end": 11.88, "text": " If we would like to create a slow motion video and we don't own an expensive slow-mo camera,"}, {"start": 11.88, "end": 16.36, "text": " we can try to shoot a normal video and simply slow it down."}, {"start": 16.36, "end": 21.62, "text": " This sounds good on paper, however, the more we slow it down, the more space we have between"}, {"start": 21.62, "end": 27.080000000000002, "text": " our individual frames and at some point our video will feel more like a slideshow."}, {"start": 27.08, "end": 31.22, "text": " To get around this problem, in a previous video, we discussed two basic techniques to"}, {"start": 31.22, "end": 33.36, "text": " fill in these missing frames."}, {"start": 33.36, "end": 38.5, "text": " One was a naive technique called frame blending that basically computes the average of two"}, {"start": 38.5, "end": 39.5, "text": " images."}, {"start": 39.5, "end": 43.599999999999994, "text": " In most cases, this doesn't help all that much because it doesn't have an understanding"}, {"start": 43.599999999999994, "end": 46.14, "text": " of the motion that takes place in the video."}, {"start": 46.14, "end": 48.239999999999995, "text": " The other one was optical flow."}, {"start": 48.239999999999995, "end": 53.72, "text": " Now this one is much smarter as it tries to estimate the kind of translation and rotational"}, {"start": 53.72, "end": 58.16, "text": " motions that take place in the video and they typically do much better."}, {"start": 58.16, "end": 63.8, "text": " However, the disadvantage of this is that it usually takes forever to compute and it often"}, {"start": 63.8, "end": 66.28, "text": " introduces visual artifacts."}, {"start": 66.28, "end": 71.12, "text": " So now we are going to have a look at NVIDIA's results and the main points of interest are"}, {"start": 71.12, "end": 76.72, "text": " always around the silhouettes of moving objects, especially around regions where the foreground"}, {"start": 76.72, "end": 78.16, "text": " and the background meet."}, {"start": 78.16, "end": 86.24, "text": " Keep an eye out for these regions throughout this video."}, {"start": 86.24, "end": 88.8, "text": " For instance, here is one example I found."}, {"start": 88.8, "end": 91.6, "text": " Let me know in the comments section if you have found more."}, {"start": 91.6, "end": 96.72, "text": " This technique builds on UNET, a superfast convolutional neural network architecture that"}, {"start": 96.72, "end": 102.19999999999999, "text": " was originally used to segment biomedical images from limited training data."}, {"start": 102.19999999999999, "end": 107.47999999999999, "text": " This neural network was trained on a bit over a thousand videos and computes multiple"}, {"start": 107.48, "end": 113.24000000000001, "text": " approximate optical flows and combines them in a way that tries to minimize artifacts."}, {"start": 113.24000000000001, "end": 117.84, "text": " As you see in this side-by-side comparisons, it works amazingly well."}, {"start": 117.84, "end": 121.4, "text": " Some artifacts still remain but are often hard to catch."}, {"start": 121.4, "end": 124.16, "text": " And this architecture is blazing fast."}, {"start": 124.16, "end": 129.24, "text": " Not real-time yet, but creating a few tens of these additional 
frames takes only a few"}, {"start": 129.24, "end": 130.24, "text": " seconds."}, {"start": 130.24, "end": 135.2, "text": " The quality of the results is also evaluated and compared to other works in the paper"}, {"start": 135.2, "end": 136.68, "text": " make sure to have a look."}, {"start": 136.68, "end": 142.4, "text": " As the current commercially available tools are super slow and take forever, I cannot wait"}, {"start": 142.4, "end": 146.96, "text": " to be able to use this technique to make some more amazing slow motion footage for you"}, {"start": 146.96, "end": 148.12, "text": " fellow scholars."}, {"start": 148.12, "end": 175.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eSaShQbUJTQ
DeepMind's AI Takes An IQ Test! 🤖
The paper "Measuring abstract reasoning in neural networks" is available here: http://proceedings.mlr.press/v80/santoro18a/santoro18a.pdf Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-1867751/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Throughout this series, we have seen many impressive applications of artificial intelligence. These techniques are capable of learning the piano from the masters of the past, beating formidable teams in complex games like Dota 2, performing well in the game Sonic the Hedgehog, or helping us revive and impersonate famous actors who are not with us anymore. However, what is often not spoken about is how narrow or how general these AI programs are. A narrow AI means an agent that can perform one task really well but cannot perform other, potentially easier tasks. The Holy Grail of machine learning research is a general AI that is capable of obtaining new knowledge by itself through abstract reasoning. This is similar to how humans learn and is a critical step in obtaining a general AI, and to tackle this problem, scientists at DeepMind created a program that is able to generate a large amount of problems that test abstract reasoning capabilities. They are inspired by human IQ tests with all these questions about sizes, colors, and progressions. They design the training process in a way that the algorithm is given training data on the progression of colors, but it is never shown similar progression examples that involve object sizes. The concept is the same, but the visual expression of the progression is different. A human easily understands the difference, but teaching abstract reasoning like this to a computer sounds almost impossible. However, now we have a tool that can create many of these questions and the correct answers to them. And I will note that some of these are not as easy as many people would expect. For instance, a vertical number progression is very easy to spot, but have a good look at these ones. Not so immediately apparent, right? Going back to being able to generate lots and lots of data, the black belt Fellow Scholars know exactly what this means. This means that we can train a neural network to perform this task. Unfortunately, existing techniques and architectures perform quite poorly. Despite the fact that we have a ton of training data, they could only get 22 to 42% of the answers right. However, these networks are amazing at doing other things like writing novels or image classification. Therefore, this means that their generalization capabilities are not too great when we go outside their core domain. This new technique goes by the name Wild Relation Network and is trained in a way that encourages reasoning. It is also designed in a way that it not only outputs a guess for the results but also tries to provide a reason for it, which, interestingly, further improved the accuracy of the network. And what is this accuracy we are talking about? It finds the correct solution 62.6% of the time. But it gets better, because this result was measured in the presence of distractor objects like these annoying lines and circles. This is quite confusing even for humans. So a result of about 60% is quite remarkable. And it gets even better, because if we don't use these distractions it is correct 78% of the time. Wow! This is indeed a step towards teaching an AI how to reason, and as the authors made this dataset publicly available for everyone, I expect a reasonable amount of research works appearing in this area in the near future. Who knows, perhaps even in the next few months. Thanks for watching and for your generous support, and I'll see you next time.
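Here is a tiny, assumed-for-illustration example of procedurally generating progression-style questions with known answers, in the spirit of the generated IQ-test problems described above. The real dataset is far richer (shapes, sizes, colors, several relation types, and carefully built distractors); panels here are just dot counts.

```python
# Toy generator of progression puzzles with known answers and distractors.
import random

def make_progression_puzzle(step=1):
    """3x3 grid of panels whose dot counts increase by `step` along each row.
    The bottom-right panel is hidden and must be picked from four candidates."""
    start = random.randint(1, 3)
    grid = [[start + row + col * step for col in range(3)] for row in range(3)]
    answer = grid[2][2]
    grid[2][2] = None                      # the missing panel to be completed
    choices = {answer}
    while len(choices) < 4:                # add distractor answer panels
        choices.add(max(1, answer + random.randint(-3, 3)))
    choices = sorted(choices)
    return grid, choices, choices.index(answer)

grid, choices, correct = make_progression_puzzle()
for row in grid:
    print(row)
print("choices:", choices, "correct index:", correct)
```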
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Ejona Ifahir."}, {"start": 4.32, "end": 9.24, "text": " Throughout this series, we have seen many impressive applications of artificial intelligence."}, {"start": 9.24, "end": 13.96, "text": " These techniques are capable of learning the piano from the masters of the past, beat"}, {"start": 13.96, "end": 20.76, "text": " formidable teams in complex games like Dota 2, perform well in the game Sonic the Hedgehog,"}, {"start": 20.76, "end": 25.76, "text": " or help us revive and impersonate famous actors who are not with us anymore."}, {"start": 25.76, "end": 31.400000000000002, "text": " However, what is often not spoken about is how narrow or how general these AI programs"}, {"start": 31.400000000000002, "end": 32.400000000000006, "text": " are."}, {"start": 32.400000000000006, "end": 38.400000000000006, "text": " A narrow AI means an agent that can perform one task really well but cannot perform"}, {"start": 38.400000000000006, "end": 41.160000000000004, "text": " other potentially easier tasks."}, {"start": 41.160000000000004, "end": 45.96, "text": " The Holy Grail of Machine Learning Research is a general AI that is capable of obtaining"}, {"start": 45.96, "end": 49.68000000000001, "text": " new knowledge by itself through abstract reasoning."}, {"start": 49.68000000000001, "end": 55.160000000000004, "text": " This is similar to how humans learn and is a critical step in obtaining a general AI,"}, {"start": 55.16, "end": 60.44, "text": " and to tackle this problem, scientists at DeepMind created a program that is able to generate"}, {"start": 60.44, "end": 65.03999999999999, "text": " a large amount of problems that test abstract reasoning capabilities."}, {"start": 65.03999999999999, "end": 70.92, "text": " They are inspired by human IQ tests with all these questions about sizes, colors, and"}, {"start": 70.92, "end": 71.92, "text": " progressions."}, {"start": 71.92, "end": 76.6, "text": " They design the training process in a way that the algorithm is given training data"}, {"start": 76.6, "end": 81.28, "text": " on the progression of colors but it is never shown similar progression examples that"}, {"start": 81.28, "end": 83.19999999999999, "text": " involve object sizes."}, {"start": 83.2, "end": 87.8, "text": " The concept is the same but the visual expression of the progression is different."}, {"start": 87.8, "end": 92.68, "text": " A human easily understands the difference but teaching abstract reasoning like this to"}, {"start": 92.68, "end": 95.52000000000001, "text": " a computer sounds almost impossible."}, {"start": 95.52000000000001, "end": 100.4, "text": " However, now we have a tool that can create many of these questions and the correct answers"}, {"start": 100.4, "end": 101.4, "text": " to them."}, {"start": 101.4, "end": 105.84, "text": " And I will note that some of these are not as easy as many people would expect."}, {"start": 105.84, "end": 110.52000000000001, "text": " For instance, a vertical number progression is very easy to spot but have a good look"}, {"start": 110.52000000000001, "end": 112.68, "text": " at these ones."}, {"start": 112.68, "end": 115.44000000000001, "text": " That's so immediately apparent, right?"}, {"start": 115.44000000000001, "end": 120.08000000000001, "text": " Going back to being able to generate lots and lots of data, the black belt fellow scholars"}, {"start": 120.08000000000001, "end": 122.56, "text": " know exactly what this means."}, 
{"start": 122.56, "end": 126.4, "text": " This means that we can train a neural network to perform this task."}, {"start": 126.4, "end": 131.48000000000002, "text": " Unfortunately, existing techniques and architectures perform quite poorly."}, {"start": 131.48000000000002, "end": 138.20000000000002, "text": " Despite the fact that we have a ton of training data, they could only get 22 to 42% of the"}, {"start": 138.20000000000002, "end": 139.20000000000002, "text": " answers right."}, {"start": 139.2, "end": 146.28, "text": " However, these networks are amazing at doing other things like writing novels or image classification."}, {"start": 146.28, "end": 151.32, "text": " Therefore this means that their generalization capabilities are not too great when we go outside"}, {"start": 151.32, "end": 152.83999999999997, "text": " their core domain."}, {"start": 152.83999999999997, "end": 158.92, "text": " This new technique goes by the name Wild Relations Network and is trained in a way that encourages"}, {"start": 158.92, "end": 160.23999999999998, "text": " reasoning."}, {"start": 160.23999999999998, "end": 165.07999999999998, "text": " It is also designed in a way that it not only outputs a guess for the results but also"}, {"start": 165.08, "end": 171.24, "text": " tries to provide a reason for it which interestingly further improved the accuracy of the network."}, {"start": 171.24, "end": 174.08, "text": " And what is this accuracy we are talking about?"}, {"start": 174.08, "end": 178.68, "text": " It finds the correct solution 62.6% of the time."}, {"start": 178.68, "end": 184.04000000000002, "text": " But it gets better because this result was measured in the presence of distractor objects"}, {"start": 184.04000000000002, "end": 186.60000000000002, "text": " like these annoying lines and circles."}, {"start": 186.60000000000002, "end": 189.4, "text": " This is quite confusing even for humans."}, {"start": 189.4, "end": 193.4, "text": " So a result about 60% is quite remarkable."}, {"start": 193.4, "end": 199.0, "text": " And it gets even better because if we don't use these distractions it is correct 78% of"}, {"start": 199.0, "end": 200.0, "text": " the time."}, {"start": 200.0, "end": 201.20000000000002, "text": " Wow!"}, {"start": 201.20000000000002, "end": 206.08, "text": " This is indeed a step towards teaching an AI how to reason and as the authors made this"}, {"start": 206.08, "end": 211.0, "text": " dataset publicly available for everyone, I expect a reasonable amount of research works"}, {"start": 211.0, "end": 213.72, "text": " appearing in this area in the near future."}, {"start": 213.72, "end": 216.44, "text": " Who knows, perhaps even in the next few months."}, {"start": 216.44, "end": 223.44, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MvFABFWPBrw
DeepMind Has A Superhuman Level Quake 3 AI Team! 🚀
Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers The paper "Human-level performance in first-person multiplayer games with population-based deep reinforcement learning" and its corresponding blog post are available here: 1. https://arxiv.org/abs/1807.01281 2. https://deepmind.com/blog/capture-the-flag/ Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepMind #Quake #Quake3
Dear Fellow Scholars, this is 2 Minute Papers with Karojona Ifahir. After having a look at OpenAI's effort to master the Dota 2 game, of course, we all know that scientists at DeepMind are also hard at work on an AI that beats the Capture the Flag Game Mode in Quake 3. Quake 3 Arena is an iconic first-person shooter game and Capture the Flag is a fun game mode where each team tries to take the other team's flag and carry it to their own base while protecting their own. This game mode requires good aiming skills, map presence, reading the opponents well, and tons of strategy, a nightmare situation for any kind of AI. Not only that, but in this version, the map changes from game to game, therefore the AI has to learn general concepts and be able to pull them off in a variety of different, previously unseen conditions. This doesn't seem to be within the realm of possibilities to pull off. The minimaps here always show the location of the players, each are color coded to blue or red to indicate their teams. Much like humans, these AI agents learned by looking at the video output of the game and have never been told anything about the game or what the rules are. These scientists at DeepMind ran a tournament with 40 human players who were matched up against these agents randomly, both as opponents and teammates. In this tournament, a team of average human players had a win probability of 43% where a team of strong players won slightly more than half 52% of their games. And now hold on to your papers because the agents were able to win 74% of their games. So the difference between the average and the strong human players' win rate is 9%. And the difference between the strongest humans and the AI is more than twice that margin 22%. This is insanity. And as you see, it barely matters what the size or the layout of the map is or how many teammates there are, the AI's win rate is always remarkably high. These agents showcase many human-like behaviors such as staying at their own base to defend it, camping within the opponent's base or following teammates. This builds on a new architecture by the name, for the win, FTW in short, good workfox, instead of training one agent, it uses a population of agents that train and evolve from each other to make sure that the diverse set of playstyles are discovered. This uses recurrent neural networks. These are neural network variants that are able to learn and produce sequences of data. Here, two of these are used, a fast and a slow one that operate on different time scales, but share a memory module. This means that one of them has a very accurate look at the near past and the other one has a more coarse look that can look back more into the past in return. If these two work together correctly, decisions can be made that are both good locally at this point in time and globally to maximize the probability of winning the whole game. This is really huge because this algorithm can perform long-term planning, which is one of the key reasons why many difficult games and tasks remain unsolved. Well, as it seems now, not for long. An additional challenge is that the game score is not necessarily subject to maximization like in most games, but there is a mapping from the scores into an internal reward, which means that the algorithm has to be able to predict its own progress towards winning. 
And note that even though Quake III and Capture the Flag are an excellent way to demonstrate the capabilities of this algorithm, this architecture can be generalized to other problems. I am going to give you a few more tidbits that I have found super interesting, but before that, if you are enjoying this episode and would like to pick up some cool perks like early access, deciding the topic of future episodes, or getting your name listed in the video description as a key supporter, why not support the show on Patreon? With this, you can also help us make better videos in the future. You can find us at patreon.com slash TwoMinutePapers, and we also support Bitcoin and other cryptocurrencies. The addresses are available in the video description. And now, onwards to the cool tidbits. A human plus agent team has been able to defeat an agent plus agent team 5% of the time, indicating that these AIs are able to coordinate and play together with anyone they are given. I get goosebumps from this. Love it. The reaction time and accuracy of the agents is better than that of humans, but not nearly as perfect as many people would think. However, they outclass humans even if we artificially reduce their accuracy and reaction times. In another experiment, two agents were paired up against two professional game tester humans who could freely communicate and train against the same agents for 12 hours to see if they could learn their patterns and force them to make mistakes. Even with this, the humans won only 25% of these games. Given the other numbers we have, it is very likely that this unfair advantage made no difference whatsoever. How about that? If there are any more questions, make sure to have a look at the paper that describes every possible tidbit you can possibly imagine. Thanks for watching and for your generous support, and I'll see you next time.
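To make the two-timescale recurrent idea in this transcript a bit more concrete, here is a minimal sketch in Python with PyTorch. This is not DeepMind's FTW implementation: the shared memory module, the population-based training and the learned internal rewards are all omitted, and the hidden size, the slow-update period and the 16-action policy head are made-up values for illustration only.

import torch
import torch.nn as nn

class TwoTimescaleCore(nn.Module):
    # A fast recurrent core runs at every time step; a slow core updates only
    # every `slow_period` steps, giving it a coarser but longer-range memory.
    def __init__(self, obs_dim=64, hidden_dim=128, num_actions=16, slow_period=4):
        super().__init__()
        self.fast = nn.GRUCell(obs_dim + hidden_dim, hidden_dim)  # sees observation + slow summary
        self.slow = nn.GRUCell(hidden_dim, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, num_actions)
        self.slow_period = slow_period

    def forward(self, observations):
        # observations: (time, batch, obs_dim)
        T, B, _ = observations.shape
        h_fast = observations.new_zeros(B, self.fast.hidden_size)
        h_slow = observations.new_zeros(B, self.slow.hidden_size)
        logits = []
        for t in range(T):
            # the fast core always sees the latest observation plus the slow core's state
            h_fast = self.fast(torch.cat([observations[t], h_slow], dim=-1), h_fast)
            if t % self.slow_period == 0:
                # the slow core only updates occasionally, so it summarizes the more distant past
                h_slow = self.slow(h_fast, h_slow)
            logits.append(self.policy_head(h_fast))
        return torch.stack(logits)  # (time, batch, num_actions)

# usage sketch: action logits for a 100-step, 8-game batch of 64-dimensional observations
# logits = TwoTimescaleCore()(torch.randn(100, 8, 64))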
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karojona Ifahir."}, {"start": 4.48, "end": 9.200000000000001, "text": " After having a look at OpenAI's effort to master the Dota 2 game, of course, we all know"}, {"start": 9.200000000000001, "end": 14.4, "text": " that scientists at DeepMind are also hard at work on an AI that beats the Capture the"}, {"start": 14.4, "end": 16.8, "text": " Flag Game Mode in Quake 3."}, {"start": 16.8, "end": 21.96, "text": " Quake 3 Arena is an iconic first-person shooter game and Capture the Flag is a fun game"}, {"start": 21.96, "end": 26.72, "text": " mode where each team tries to take the other team's flag and carry it to their own base"}, {"start": 26.72, "end": 28.44, "text": " while protecting their own."}, {"start": 28.44, "end": 33.480000000000004, "text": " This game mode requires good aiming skills, map presence, reading the opponents well,"}, {"start": 33.480000000000004, "end": 37.84, "text": " and tons of strategy, a nightmare situation for any kind of AI."}, {"start": 37.84, "end": 42.6, "text": " Not only that, but in this version, the map changes from game to game, therefore the AI"}, {"start": 42.6, "end": 47.8, "text": " has to learn general concepts and be able to pull them off in a variety of different,"}, {"start": 47.8, "end": 49.56, "text": " previously unseen conditions."}, {"start": 49.56, "end": 53.16, "text": " This doesn't seem to be within the realm of possibilities to pull off."}, {"start": 53.16, "end": 57.120000000000005, "text": " The minimaps here always show the location of the players, each are color coded to"}, {"start": 57.12, "end": 59.64, "text": " blue or red to indicate their teams."}, {"start": 59.64, "end": 64.88, "text": " Much like humans, these AI agents learned by looking at the video output of the game and"}, {"start": 64.88, "end": 68.92, "text": " have never been told anything about the game or what the rules are."}, {"start": 68.92, "end": 74.03999999999999, "text": " These scientists at DeepMind ran a tournament with 40 human players who were matched up against"}, {"start": 74.03999999999999, "end": 78.28, "text": " these agents randomly, both as opponents and teammates."}, {"start": 78.28, "end": 84.12, "text": " In this tournament, a team of average human players had a win probability of 43% where"}, {"start": 84.12, "end": 90.24000000000001, "text": " a team of strong players won slightly more than half 52% of their games."}, {"start": 90.24000000000001, "end": 96.96000000000001, "text": " And now hold on to your papers because the agents were able to win 74% of their games."}, {"start": 96.96000000000001, "end": 102.28, "text": " So the difference between the average and the strong human players' win rate is 9%."}, {"start": 102.28, "end": 107.56, "text": " And the difference between the strongest humans and the AI is more than twice that margin"}, {"start": 107.56, "end": 109.36000000000001, "text": " 22%."}, {"start": 109.36000000000001, "end": 111.12, "text": " This is insanity."}, {"start": 111.12, "end": 115.60000000000001, "text": " And as you see, it barely matters what the size or the layout of the map is or how many"}, {"start": 115.60000000000001, "end": 119.92, "text": " teammates there are, the AI's win rate is always remarkably high."}, {"start": 119.92, "end": 124.60000000000001, "text": " These agents showcase many human-like behaviors such as staying at their own base to defend"}, {"start": 124.60000000000001, "end": 129.28, "text": " it, 
camping within the opponent's base or following teammates."}, {"start": 129.28, "end": 135.48000000000002, "text": " This builds on a new architecture by the name, for the win, FTW in short, good workfox,"}, {"start": 135.48000000000002, "end": 140.6, "text": " instead of training one agent, it uses a population of agents that train and evolve from each"}, {"start": 140.6, "end": 145.04, "text": " other to make sure that the diverse set of playstyles are discovered."}, {"start": 145.04, "end": 147.24, "text": " This uses recurrent neural networks."}, {"start": 147.24, "end": 152.07999999999998, "text": " These are neural network variants that are able to learn and produce sequences of data."}, {"start": 152.07999999999998, "end": 157.84, "text": " Here, two of these are used, a fast and a slow one that operate on different time scales,"}, {"start": 157.84, "end": 159.95999999999998, "text": " but share a memory module."}, {"start": 159.95999999999998, "end": 164.6, "text": " This means that one of them has a very accurate look at the near past and the other one has"}, {"start": 164.6, "end": 168.64, "text": " a more coarse look that can look back more into the past in return."}, {"start": 168.64, "end": 173.76, "text": " If these two work together correctly, decisions can be made that are both good locally at this"}, {"start": 173.76, "end": 178.95999999999998, "text": " point in time and globally to maximize the probability of winning the whole game."}, {"start": 178.95999999999998, "end": 183.64, "text": " This is really huge because this algorithm can perform long-term planning, which is one"}, {"start": 183.64, "end": 187.83999999999997, "text": " of the key reasons why many difficult games and tasks remain unsolved."}, {"start": 187.83999999999997, "end": 190.64, "text": " Well, as it seems now, not for long."}, {"start": 190.64, "end": 195.0, "text": " An additional challenge is that the game score is not necessarily subject to maximization"}, {"start": 195.0, "end": 199.92, "text": " like in most games, but there is a mapping from the scores into an internal reward, which"}, {"start": 199.92, "end": 204.72, "text": " means that the algorithm has to be able to predict its own progress towards winning."}, {"start": 204.72, "end": 209.56, "text": " And note that even though Quake III and Capture the Flag is an excellent way to demonstrate"}, {"start": 209.56, "end": 214.68, "text": " the capabilities of this algorithm, this architecture can be generalized to other problems."}, {"start": 214.68, "end": 219.36, "text": " I am going to give you a few more tidbits that I have found super interesting, but before,"}, {"start": 219.36, "end": 224.4, "text": " if you are enjoying this episode and would like to pick up some cool perks like early access,"}, {"start": 224.4, "end": 228.6, "text": " ending the topic of future episodes or getting your name listed in the video description"}, {"start": 228.6, "end": 232.20000000000002, "text": " as a key supporter, why not support the show on Patreon."}, {"start": 232.20000000000002, "end": 235.32, "text": " With this, you can also help us make better videos in the future."}, {"start": 235.32, "end": 240.4, "text": " You can find us at patreon.com slash 2 minute papers and we also support Bitcoin and other"}, {"start": 240.4, "end": 241.68, "text": " crypto currencies."}, {"start": 241.68, "end": 244.4, "text": " The addresses are available in the video description."}, {"start": 244.4, "end": 246.68, "text": " And now, onwards to the cool tidbits."}, 
{"start": 246.68, "end": 253.4, "text": " A human plus agent team has been able to defeat an agent plus agent team 5% of the time,"}, {"start": 253.4, "end": 259.2, "text": " indicating that these AIs are able to coordinate and play together with anyone they are given."}, {"start": 259.2, "end": 261.0, "text": " I get goosebumps from this."}, {"start": 261.0, "end": 262.0, "text": " Love it."}, {"start": 262.0, "end": 266.76, "text": " The reaction time and accuracy of the agents is better than that of humans, but not nearly"}, {"start": 266.76, "end": 268.96, "text": " perfect as many people would think."}, {"start": 268.96, "end": 275.0, "text": " However, they outclass humans even if we artificially reduce their accuracy and reaction times."}, {"start": 275.0, "end": 279.96000000000004, "text": " In another experiment, two agents were paired up against two professional game tester humans"}, {"start": 279.96, "end": 285.96, "text": " who could freely communicate and train against the same agents for 12 hours to see if they"}, {"start": 285.96, "end": 289.4, "text": " can learn their patterns and force them to make mistakes."}, {"start": 289.4, "end": 294.08, "text": " Even with this, humans had only 125% of these games."}, {"start": 294.08, "end": 298.76, "text": " Given the other numbers we have, it is very likely that this unfair advantage made no difference"}, {"start": 298.76, "end": 300.15999999999997, "text": " whatsoever."}, {"start": 300.15999999999997, "end": 301.56, "text": " How about that?"}, {"start": 301.56, "end": 305.52, "text": " If there are any more questions, make sure to have a look at the paper that describes"}, {"start": 305.52, "end": 308.56, "text": " every possible tidbit you can possibly imagine."}, {"start": 308.56, "end": 312.36, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=xHpwLiTieu4
This is How You Hack A Neural Network
The paper "Adversarial Reprogramming of Neural Networks" is available here: https://arxiv.org/abs/1806.11146 Andrej Karpathy's image classifier: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-1839406/ Captcha image source: https://en.wikipedia.org/wiki/CAPTCHA#/media/File:Captchacat.png Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a mind-boggling new piece of work from scientists at Google Brain on how to hack and reprogram neural networks to make them perform any task we want. A neural network is given by a prescribed number of layers, neurons within these layers, and weights. Or in other words, the list of conditions under which these neurons will fire. By choosing the weights appropriately, we can make the neural network perform a large variety of tasks, for instance, to tell us what an input image depicts, or predict new camera viewpoints when looking at a virtual scene. So this means that by changing the weights of the neural network, we can reprogram it to perform something completely different, for instance, solve a captcha for us. That is a really cool feature. This work reveals a new kind of vulnerability by performing this kind of reprogramming of neural networks in an adversarial manner, forcing them to perform tasks that they were originally not intended to do. The network can perform new tasks that it has never done before, and these tasks are chosen by the adversary. So how do adversarial attacks work in general? What does this mean? Let's have a look at a classifier. These neural networks are trained on a given, already existing dataset. This means that they look at a lot of images of buses, and from these, they learn the most important features that are common across buses. Then, when we give them a new, previously unseen image of a bus, they will now be able to identify whether we are seeing a bus or an ostrich. A good example of an adversarial attack is when we present such a classifier not with an image of a bus, but a bus plus some carefully crafted, barely perceptible noise that forces the neural network to misclassify it as an ostrich. And in this new work, we are not only interested in forcing the neural network to make a mistake, but we want to make it exactly the kind of mistake we want. That sounds awesome, but also quite nebulous. So let's have a look at an example. Here, we are trying to reprogram an image classifier to count the number of squares in our images. Step number one, we create a mapping from the classifier's original labels to our desired labels. Initially, this network was made to identify animals like sharks, hens, and ostriches. Now, we seek to get this network to count the number of squares in our images, so we make an appropriate mapping between their domain and our domain. And then, we present the neural network with our images. These images are basically noise and blocks, where the goal is to create these in a way that coerces the neurons within the neural network to perform our desired task. The neural network then says tiger shark and ostrich, which, when mapped to our domain, means four and ten squares respectively, which is exactly the answer we were looking for. Now, as you see, the attack is not subtle at all, but it doesn't need to be. Quoting the paper: the attack does not need to be imperceptible to humans, or even subtle, in order to be considered a success. Potential consequences of adversarial reprogramming include theft of computational resources from public-facing services and repurposing of AI-driven assistants into spies or spam bots. As you see, it is of paramount importance that we talk about AI safety within the series, and my quest is to make sure that everyone is vigilant that tools like this now exist.
Thank you so much for coming along on this journey, and if you're enjoying it, make sure to subscribe and hit the bell icon to never miss a future episode, some of which will be on follow-up papers on this super interesting topic. Thanks for watching, and for your generous support, and I'll see you next time.
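To make the attack recipe from this transcript more tangible, here is a minimal sketch in Python with PyTorch of the general adversarial reprogramming idea: a small task image is embedded in the middle of a larger canvas, a single learned "program" pattern fills the rest, and the frozen classifier's labels are remapped to the task's labels. This is not the authors' code; frozen_net is assumed to be some pretrained, frozen ImageNet-style classifier, and the label mapping, sizes and learning rate are arbitrary choices for illustration.

import torch
import torch.nn.functional as F

IMAGE_SIZE, TASK_SIZE = 224, 64
pad = (IMAGE_SIZE - TASK_SIZE) // 2

# the adversarial "program": one learned pattern, shared across all inputs
program = torch.zeros(3, IMAGE_SIZE, IMAGE_SIZE, requires_grad=True)

# the program lives only outside the central region where the task image is pasted
mask = torch.ones(3, IMAGE_SIZE, IMAGE_SIZE)
mask[:, pad:pad + TASK_SIZE, pad:pad + TASK_SIZE] = 0.0

# hypothetical label mapping: task label k ("k squares") -> a fixed ImageNet class index
imagenet_targets = torch.arange(1, 11)

def reprogrammed_input(task_images):
    # task_images: (batch, 3, TASK_SIZE, TASK_SIZE), values in [0, 1]
    canvas = torch.zeros(task_images.shape[0], 3, IMAGE_SIZE, IMAGE_SIZE)
    canvas[:, :, pad:pad + TASK_SIZE, pad:pad + TASK_SIZE] = task_images
    return canvas + mask * torch.tanh(program)  # embed the task image, surround it with the program

optimizer = torch.optim.Adam([program], lr=0.05)

def training_step(frozen_net, task_images, task_labels):
    # only `program` is updated; the pretrained classifier itself is never touched
    logits = frozen_net(reprogrammed_input(task_images))
    loss = F.cross_entropy(logits, imagenet_targets[task_labels])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()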
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifahir."}, {"start": 4.08, "end": 9.14, "text": " This is a mind-boggling new piece of work from scientists at Google Brain on how to hack"}, {"start": 9.14, "end": 13.72, "text": " and reprogram neural networks to make them perform any task we want."}, {"start": 13.72, "end": 19.52, "text": " A neural network is given by a prescribed number of layers, neurons within these layers, and weights."}, {"start": 19.52, "end": 23.66, "text": " Or in other words, the list of conditions under which these neurons will fire."}, {"start": 23.66, "end": 29.04, "text": " By choosing the weights appropriately, we can make the neural network perform a large variety of tasks,"}, {"start": 29.04, "end": 36.04, "text": " for instance, to tell us what an input image depicts, or predict new camera viewpoints when looking at a virtual scene."}, {"start": 36.04, "end": 42.92, "text": " So this means that by changing the weights of the neural network, we can reprogram it to perform something completely different,"}, {"start": 42.92, "end": 45.58, "text": " for instance, solve a capture for us."}, {"start": 45.58, "end": 47.599999999999994, "text": " That is a really cool feature."}, {"start": 47.599999999999994, "end": 53.64, "text": " This work reveals a new kind of vulnerability by performing this kind of reprogramming of neural networks"}, {"start": 53.64, "end": 59.68, "text": " in an adversarial manner, forcing them to perform tasks that they were originally not intended to do."}, {"start": 59.68, "end": 65.92, "text": " It can perform new tasks that it has never done before, and these tasks are chosen by the adversary."}, {"start": 65.92, "end": 68.52, "text": " So how do adversarial attacks work in general?"}, {"start": 68.52, "end": 69.92, "text": " What does this mean?"}, {"start": 69.92, "end": 71.6, "text": " Let's have a look at a classifier."}, {"start": 71.6, "end": 75.88, "text": " These neural networks are trained on a given already existing dataset."}, {"start": 75.88, "end": 78.96000000000001, "text": " This means that they look at a lot of images of buses,"}, {"start": 78.96, "end": 84.0, "text": " and from these, they learn the most important features that are common across buses."}, {"start": 84.0, "end": 87.83999999999999, "text": " Then, when we give them a new, previously unseen image of a bus,"}, {"start": 87.83999999999999, "end": 92.36, "text": " they will now be able to identify whether we are seeing a bus or an ostrich."}, {"start": 92.36, "end": 98.47999999999999, "text": " A good example of an adversarial attack is when we present such a classifier with not an image of a bus,"}, {"start": 98.47999999999999, "end": 102.67999999999999, "text": " but a bus plus some carefully crafted noise that is barely perceptible"}, {"start": 102.67999999999999, "end": 106.39999999999999, "text": " that forces the neural network to misclassify it as an ostrich."}, {"start": 106.4, "end": 111.28, "text": " And in this new work, we are not only interested in forcing the neural network to make a mistake,"}, {"start": 111.28, "end": 115.04, "text": " but we want to make it exactly the kind of mistake we want."}, {"start": 115.04, "end": 118.08000000000001, "text": " That sounds awesome, but also quite nabulous."}, {"start": 118.08000000000001, "end": 119.84, "text": " So let's have a look at an example."}, {"start": 119.84, "end": 126.16000000000001, "text": " Here, we are trying to reprogram an image 
classifier to count the number of squares in our images."}, {"start": 126.16000000000001, "end": 132.44, "text": " Step number one, we create a mapping between the classifier's original labels to our desired labels."}, {"start": 132.44, "end": 138.35999999999999, "text": " Initially, this network was made to identify animals like sharks, hands, and ostriches."}, {"start": 138.35999999999999, "end": 142.68, "text": " Now, we seek to get this network to count the number of squares in our images,"}, {"start": 142.68, "end": 146.96, "text": " so we make an appropriate mapping between their domain and our domain."}, {"start": 146.96, "end": 150.0, "text": " And then, we present the neural network with our images."}, {"start": 150.0, "end": 155.12, "text": " These images are basically noise and blocks, where the goal is to create these in a way"}, {"start": 155.12, "end": 160.52, "text": " that take worse the neurons within the neural network to perform our desired task."}, {"start": 160.52, "end": 164.12, "text": " The neural network then says tiger shark and ostrich,"}, {"start": 164.12, "end": 169.16000000000003, "text": " which, when mapped to our domain, means four and 10 squares respectively,"}, {"start": 169.16000000000003, "end": 171.72, "text": " which is exactly the answer we were looking for."}, {"start": 171.72, "end": 175.88, "text": " Now, as you see, the attack is not subtle at all, but it doesn't need to be."}, {"start": 175.88, "end": 179.96, "text": " Quoting the paper, the attack does not need to be imperceptible to humans"}, {"start": 179.96, "end": 183.0, "text": " or even subtle in order to be considered a success."}, {"start": 183.0, "end": 188.76000000000002, "text": " Potential consequences of adversarial reprogramming include theft of computational resources"}, {"start": 188.76, "end": 195.07999999999998, "text": " from public-facing services and repurposing of AI-driven assistance into spies or spam bots."}, {"start": 195.07999999999998, "end": 200.51999999999998, "text": " As you see, it is of paramount importance that we talk about AI safety within the series,"}, {"start": 200.51999999999998, "end": 205.72, "text": " and my quest is to make sure that everyone is vigilant that now tools like this exist."}, {"start": 205.72, "end": 209.16, "text": " Thank you so much for coming along on this journey, and if you're enjoying it,"}, {"start": 209.16, "end": 213.23999999999998, "text": " make sure to subscribe and hit the bell icon to never miss a future episode,"}, {"start": 213.23999999999998, "end": 217.48, "text": " some of which will be on follow-up papers on this super interesting topic."}, {"start": 217.48, "end": 221.48, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8GUYAVXmhsI
DeepMind's AI Learns The Piano From The Masters of The Past
The paper "The challenge of realistic music generation: modelling raw audio at scale" is available here: https://arxiv.org/abs/1806.10474 https://drive.google.com/drive/folders/1fvS-DU8AcK078-5k6WGudiBn0XSeE0_D Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-1839406/ Score image credit: https://pixabay.com/en/piano-music-score-music-sheet-1655558/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we will listen to a new AI from DeepMind that is capable of creating beautiful piano music. Because there are many algorithms that do that, to put things into perspective, let's talk about the two key differentiating factors that set this method apart from previously existing techniques. One, music is typically learned from high-level representations, such as the score or MIDI data. This is a precise representation of what needs to be played, but it doesn't tell us how to play it. These small nuances are what make the music come alive, and this is exactly what is missing from most of the synthesis techniques. This new method is able to learn these structures and generates not MIDI signals, but raw audio waveforms. And two, it is better at retaining stylistic consistency. Most previous techniques create music that is consistent on a shorter time scale, but do not take into consideration what was played 30 seconds ago, and therefore they lack the high-level structure that is the hallmark of quality songwriting. However, this new method shows stylistic consistency over long time periods. Let's give it a quick listen and talk about the architecture of this learning algorithm after that; while we listen, I'll show you the composers it has learned from to produce this. I have never heard any AI-generated music before with such articulation, and the harmonies are also absolutely amazing. Truly stunning results. It uses an architecture that goes by the name autoregressive discrete autoencoder. This contains an encoder module that takes a raw audio waveform and compresses it down into an internal representation, where the decoder part is responsible for reconstructing the raw audio from this internal representation. Both of them are neural networks. The autoregressive part means that the algorithm looks at previous time steps in the learned audio signals when producing new notes, and it is implemented in the decoder module. Essentially, this is what gives the algorithm longer-term memory to remember what it played earlier. As you have seen the dataset the algorithm learned from as the music was playing, I am also really curious how we can exert artistic control over the output by changing the dataset. Essentially, you can likely change what the student learns by changing the textbooks used to teach them. For now, let's marvel at one more sound sample. This is already incredible, and I can only imagine what we will be able to do not 10 years from now, just a year from now. Thanks for watching and for your generous support, and I'll see you next time.
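For readers who want a feel for what an "autoregressive discrete autoencoder" looks like structurally, here is a heavily simplified sketch in Python with PyTorch. It only mirrors the shape of the idea in this transcript, an encoder that compresses raw audio into discrete codes and an autoregressive decoder that reconstructs the waveform from them; DeepMind's actual model is far larger, uses a WaveNet-style decoder, and needs extra machinery such as a straight-through estimator to train through the discrete step, none of which is shown here. All sizes are made up.

import torch
import torch.nn as nn

class TinyDiscreteAutoencoder(nn.Module):
    def __init__(self, codebook_size=256, hidden=128, downsample=64):
        super().__init__()
        # encoder: compress the raw waveform into a much shorter sequence of vectors
        self.encoder = nn.Conv1d(1, hidden, kernel_size=downsample, stride=downsample)
        self.codebook = nn.Embedding(codebook_size, hidden)
        # decoder: autoregressive over the waveform, conditioned on the discrete codes
        self.decoder = nn.GRU(input_size=1 + hidden, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
        self.downsample = downsample

    def encode(self, wav):                        # wav: (batch, 1, samples)
        z = self.encoder(wav).transpose(1, 2)     # (batch, steps, hidden)
        codes = torch.cdist(z, self.codebook.weight.unsqueeze(0).expand(z.shape[0], -1, -1))
        return codes.argmin(dim=-1)               # nearest codebook entry per step, (batch, steps)

    def decode(self, codes, wav_shifted):         # wav_shifted: (batch, samples, 1), previous samples
        cond = self.codebook(codes)                             # (batch, steps, hidden)
        cond = cond.repeat_interleave(self.downsample, dim=1)   # stretch codes back to sample rate
        x = torch.cat([wav_shifted, cond], dim=-1)              # previous sample + conditioning
        h, _ = self.decoder(x)
        return self.out(h)                                      # predicted next samples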
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Jolenei Fahir."}, {"start": 4.08, "end": 10.72, "text": " Today, we will listen to a new AI from Deep Mind that is capable of creating beautiful piano music."}, {"start": 10.72, "end": 14.76, "text": " Because there are many algorithms that do that, to put things into perspective,"}, {"start": 14.76, "end": 20.84, "text": " let's talk about the two key differentiating factors that set this method apart from previously existing techniques."}, {"start": 20.84, "end": 28.16, "text": " One, music is typically learned from high-level representations, such as the score or MIDI data."}, {"start": 28.16, "end": 33.96, "text": " This is a precise representation of what needs to be played, but they don't tell us how to play them."}, {"start": 33.96, "end": 41.28, "text": " These small nuances are what makes the music come alive, and this is exactly what is missing from most of the synthesis techniques."}, {"start": 41.28, "end": 48.56, "text": " This new method is able to learn these structures and generates not MIDI signals, but raw audio waveforms."}, {"start": 48.56, "end": 52.64, "text": " And two, it is better at retaining stylistic consistency."}, {"start": 52.64, "end": 61.120000000000005, "text": " Most previous techniques create music that is consistent on a shorter time scale, but do not take into consideration what was played 30 seconds ago,"}, {"start": 61.120000000000005, "end": 66.56, "text": " and therefore they lack the high-level structure that is the hallmark of quality songwriting."}, {"start": 66.56, "end": 71.36, "text": " However, this new method shows stylistic consistency over long time periods."}, {"start": 71.36, "end": 76.48, "text": " Let's give it a quick listen and talk about the architecture of this learning algorithm after that,"}, {"start": 76.48, "end": 83.44, "text": " while we listen, I'll show you the composers it has learned from to produce this."}, {"start": 106.64, "end": 125.28, "text": " I have never heard any AI-generated music before with such articulation, and the harmonies are also absolutely amazing."}, {"start": 125.28, "end": 127.12, "text": " Truly stunning results."}, {"start": 127.12, "end": 132.24, "text": " It uses an architecture that goes by the name Autoregressive Discrete Autoencoder."}, {"start": 132.24, "end": 139.36, "text": " This contains an encoder module that takes a raw audio waveform and compresses it down into an internal representation"}, {"start": 139.36, "end": 145.52, "text": " where the encoder part is responsible for reconstructing the raw audio from this internal representation."}, {"start": 145.52, "end": 147.44, "text": " Both of them are neural networks."}, {"start": 147.44, "end": 153.52, "text": " The Autoregressive part means that the algorithm looks at previous time steps in the learned audio signals"}, {"start": 153.52, "end": 157.36, "text": " when producing new notes and is implemented in the encoder module."}, {"start": 157.36, "end": 162.8, "text": " Essentially, this is what gives the algorithm longer-term memory to remember what it played earlier."}, {"start": 162.8, "end": 166.64000000000001, "text": " As you have seen the dataset the algorithm learned from as the music was playing,"}, {"start": 166.64000000000001, "end": 173.04000000000002, "text": " I am also really curious how we can exert artistic control over the output by changing the dataset."}, {"start": 173.04000000000002, "end": 178.72000000000003, "text": 
" Essentially, you can likely change what the student learns by changing the text books used to teach them."}, {"start": 178.72, "end": 188.72, "text": " For now, let's marvel at one more sound sample."}, {"start": 208.72, "end": 234.16, "text": " This is already incredible and I can only imagine what we will be able to do not 10 years from now,"}, {"start": 234.16, "end": 235.68, "text": " just a year from now."}, {"start": 235.68, "end": 239.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=yEOEqaEgu94
OpenAI + DOTA2: 180 Years of Learning Per Day
The blog post on OpenAI Five is available here: 1. https://blog.openai.com/openai-five/ 2. https://blog.openai.com/openai-five-benchmark/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Day9's Learning DOTA2 series. Note - sometimes explicit: https://www.youtube.com/watch?v=8AyrC5Ki31c&list=PLgmCLtUkEutILNA9EM0BON6ShoQGZhd3P I also recommend GameLeap's channel: https://www.youtube.com/channel/UCy0-ftAwxMHzZc74OhYT3PA/videos We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: OpenAI Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. You know that I am always excited to tell you about news where AI players manage to beat humans at more and more complex games. Today we are going to talk about Dota 2, which is a multiplayer online battle arena game with a huge cult following and world championship events with a prize pool of over 40 million dollars. This is not just some game, and just to demonstrate how competitive it is and how quickly it is growing, last time we talked about this in Two Minute Papers episode 180, where an AI beat some of the best players of the game in a limited one versus one setting, and the prize pool was 20 million dollars back then. This was a huge milestone, as this game requires long-term strategic planning, has incomplete information and a high-dimensional continuous action space, which is a classical nightmare situation for any AI. Now, the next milestone was to defeat a human team in the full 5 vs 5 game, and I promised to report back when there is something new on this project. So here we go. If you look through the forums and our YouTube comments, it is generally believed that this is so complex that it would never ever happen. I would agree that the search space is indeed stupendously large and the problem is notoriously difficult, but whoever thinks that this will never be solved has clearly not been watching enough Two Minute Papers. Now you better hold on to your papers right away, because this video dropped 10 months ago in August 2017, and since then the AI has played 180 years' worth of gameplay against itself every single day. 80% of these games it played against itself and 20% against its past self, and even though 5 of these bots are supposed to work together as a team, there is no explicit communication channel between them. And now it is ready to play 5 vs 5 matches. Some limitations still apply, but since then the AI was able to get a firm understanding of the importance of team fighting, predicting the outcome of future actions and encounters, ganking, or in other words, ambushing unsuspecting opponents, and many other important pieces of the game. The May 15th version of the AI was evenly matched against OpenAI's in-house team, which is a formidable result, and I find it really amusing that these scientists were beaten by their own algorithm. This is, however, not a world-class Dota 2 team, and the crazy part is that the next version of the AI was tested three weeks later, and it not only beat the in-house team easily but also defeated several other teams and a semi-professional team as well. It is often incorrectly said on several forums that these algorithms defeat humans because they can click faster, so I will note that these bots perform about 150 to 170 actions per minute, which is approximately in line with an intermediate human player, and it is also to be noted that Dota 2 is not that sensitive to this metric. More clicking does not really mean more winning here at all. The human players were also able to train with an earlier version of this AI. There will be an upcoming event on July 28th where these bots will challenge a team of top players, so stay tuned for some more updates on this. There is no paper yet, but I have put a link to a blog post and a full video in the description, and it is a gold mine of information and was such a joy to read through. So what do you think? Who will win, and is a 5 vs 5 game in Dota 2 more complex than playing Starcraft 2?
If you wish to hear more about this, please consider helping us tell this story to more people and convert them into Fellow Scholars by supporting the series through Patreon, and as always, we also accept Bitcoin, Ethereum and Litecoin; the addresses are in the video description. And if you are now in the mood to learn some more about Dota 2, I recommend taking a look at Day9's channel; I have put a link to a relevant series in the video description. Highly recommend it. So there you go, a fresh Two Minute Papers episode that is not two minutes and is not about a paper. Yet. Love it. Thanks for watching and for your generous support, I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " And dear fellow scholars, this is two minute papers with Karo Zsolnai-Fehir."}, {"start": 4.36, "end": 9.28, "text": " You know that I am always excited to tell you about news where AI players manage to beat"}, {"start": 9.28, "end": 11.88, "text": " humans at more and more complex games."}, {"start": 11.88, "end": 17.16, "text": " Today we are going to talk about Dota 2, which is a multiplayer online battle arena game"}, {"start": 17.16, "end": 23.2, "text": " with a huge cult following and world championship events with a prize pool of over 40 million"}, {"start": 23.2, "end": 24.2, "text": " dollars."}, {"start": 24.2, "end": 29.12, "text": " This is not just some game, and just to demonstrate how competitive it is and how quickly it"}, {"start": 29.12, "end": 34.72, "text": " is growing, last time we talked about this in two minute papers episode 180, where an"}, {"start": 34.72, "end": 39.72, "text": " AI beat some of the best players of the game in a limited one versus one setting and the"}, {"start": 39.72, "end": 42.88, "text": " prize pool was 20 million dollars back then."}, {"start": 42.88, "end": 48.040000000000006, "text": " This was a huge milestone as this game requires long-term strategic planning, has incomplete"}, {"start": 48.040000000000006, "end": 53.400000000000006, "text": " information and a high-dimensional continuous action space which is a classical nightmare"}, {"start": 53.400000000000006, "end": 55.8, "text": " situation for any AI."}, {"start": 55.8, "end": 61.44, "text": " One, the next milestone, was to defeat a human team in the full 5 vs 5 game and I promise"}, {"start": 61.44, "end": 64.72, "text": " to report back when there is something new on this project."}, {"start": 64.72, "end": 66.0, "text": " So here we go."}, {"start": 66.0, "end": 70.08, "text": " If you look through the forums and our YouTube comments, it is generally believed that this"}, {"start": 70.08, "end": 73.44, "text": " is so complex that it would never ever happen."}, {"start": 73.44, "end": 79.0, "text": " I would agree that the search space is in this to pendously large and the problem is notoriously"}, {"start": 79.0, "end": 83.6, "text": " difficult, but whoever thinks that this will never be solved has clearly not been watching"}, {"start": 83.6, "end": 85.47999999999999, "text": " enough two minute papers."}, {"start": 85.48, "end": 90.68, "text": " Now you better hold on to your papers right away because this video dropped 10 months ago"}, {"start": 90.68, "end": 99.64, "text": " in August 2017 and since then the AI has played 180 years worth of gameplay against itself"}, {"start": 99.64, "end": 101.48, "text": " every single day."}, {"start": 101.48, "end": 107.92, "text": " 80% of these games it played against itself and 20% against its past self and even though"}, {"start": 107.92, "end": 113.04, "text": " 5 of these bots are supposed to work together as a team, there is no explicit communication"}, {"start": 113.04, "end": 114.48, "text": " channel between them."}, {"start": 114.48, "end": 118.28, "text": " And now it is ready to play 5 vs 5 matches."}, {"start": 118.28, "end": 123.28, "text": " Some limitations still apply but since then the AI was able to get a firm understanding"}, {"start": 123.28, "end": 128.92000000000002, "text": " of the importance of team fighting, predicting the outcome of future actions and encounters,"}, {"start": 128.92000000000002, "end": 134.44, "text": " ganking or in other words, ambushing 
unsuspecting opponents and many other important pieces of"}, {"start": 134.44, "end": 135.44, "text": " the game."}, {"start": 135.44, "end": 140.76, "text": " The May 15th version of the AI was evenly matched against open AI's in house team which"}, {"start": 140.76, "end": 145.56, "text": " is a formidable result and I find it really amusing that these scientists were beaten by"}, {"start": 145.56, "end": 147.04, "text": " their own algorithm."}, {"start": 147.04, "end": 152.0, "text": " This is however not a world class Dota 2 team and the crazy part is that the next version"}, {"start": 152.0, "end": 157.79999999999998, "text": " of the AI was tested three weeks later and it not only beat the in house team easily"}, {"start": 157.79999999999998, "end": 163.0, "text": " but also defeated several other teams and a semi professional team as well."}, {"start": 163.0, "end": 167.79999999999998, "text": " As it is often incorrectly said on several forums that these algorithms defeat humans"}, {"start": 167.8, "end": 174.8, "text": " because they can click faster so I will note that these bots perform about 150 to 170"}, {"start": 174.8, "end": 179.60000000000002, "text": " actions per minute which is approximately in line with an intermediate human player"}, {"start": 179.60000000000002, "end": 184.64000000000001, "text": " and it is also to be noted that Dota 2 is not that sensitive to this metric."}, {"start": 184.64000000000001, "end": 188.48000000000002, "text": " More clicking does not really mean more winning here and all."}, {"start": 188.48000000000002, "end": 192.84, "text": " The human players were also able to train with an earlier version of this AI."}, {"start": 192.84, "end": 198.08, "text": " There will be an upcoming event on July 28th where these bots will challenge a team of"}, {"start": 198.08, "end": 201.68, "text": " top players so stay tuned for some more updates on this."}, {"start": 201.68, "end": 206.12, "text": " There is no paper yet but I have put a link to a blog post and a full video in the description"}, {"start": 206.12, "end": 210.72, "text": " and it is a gold mine of information and was such a joy to read through."}, {"start": 210.72, "end": 212.2, "text": " So what do you think?"}, {"start": 212.2, "end": 218.52, "text": " Who will win and is a 5 vs 5 game in Dota 2 more complex than playing Starcraft 2?"}, {"start": 218.52, "end": 222.54, "text": " If you wish to hear more about this please consider helping us tell this story to more"}, {"start": 222.54, "end": 228.0, "text": " people and convert them into fellow scholars by supporting the series through Patreon"}, {"start": 228.0, "end": 233.35999999999999, "text": " and as always we also accept Bitcoin, Ethereum and Litecoin the addresses are in the video"}, {"start": 233.35999999999999, "end": 238.12, "text": " description and if you are now in the mood to learn some more about Dota 2 I recommend"}, {"start": 238.12, "end": 242.35999999999999, "text": " taking a look at Day 9's channel I have put a link to a relevant series in the video"}, {"start": 242.35999999999999, "end": 243.35999999999999, "text": " description."}, {"start": 243.35999999999999, "end": 244.68, "text": " Highly recommend it."}, {"start": 244.68, "end": 248.88, "text": " So there you go, a fresh 2 minute paper set episode that is not 2 minutes and it is"}, {"start": 248.88, "end": 250.28, "text": " not about a paper."}, {"start": 250.28, "end": 252.04, "text": " Yet, love it."}, {"start": 252.04, "end": 255.56, "text": " Thanks for 
watching and for your generous support, I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2FHHuRTkr_Y
OpenAI's Gaming AI Contest: Results | Two Minute Papers #265
Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ The blog post "Retro Contest: Results" and the corresponding paper is available here: 1. https://blog.openai.com/first-retro-contest-retrospective/ 2. https://arxiv.org/abs/1804.03720 Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-933427/ Music credit: https://opengameart.org/content/nes-shooter-music-5-tracks-3-jingles Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a contest by OpenAI where a bunch of AIs compete to decide who has the best transfer learning capabilities. Transfer learning means that the training and the testing environment differ significantly, therefore only the AIs that learn general concepts prevail, and the ones that try to get by with memorizing things quickly fall behind. In this experiment, these programs start playing Sonic the Hedgehog and are given a bunch of levels to train on. However, like in a good test at school, the levels for the final evaluation are kept secret. So, the goal is that only high-quality, general algorithms prevail, and we can't cheat through the program, as we don't know what the final exam will entail. We only know that we have to make the most of the training materials to pass. Sonic is a legendary platform game where we have to blaze through levels by avoiding obstacles and traps, often while traveling at the speed of sound. Here you can see the winning submission taking the exam on a previously unseen level. After one minute of training, as expected, the AI started to explore the controls, but is still quite inept and does not make any meaningful progress on the level. After 30 minutes, things look significantly better, as the AI now understands the basics of the game. And look here, almost got up there... and got it. It is clearly making progress as it collects some coins, defeats enemies, goes through the loop, and gets stuck, seemingly because it doesn't yet know how being underwater changes how high it can jump. This is quite a bit of a special case, so we are getting there. After only 60 to 120 minutes, it became a competent player and was able to finish this challenging map with only a few mistakes. Really impressive transfer learning in just about an hour. Note that the algorithm has never seen this level before. Here you see a really cool visualization of three different AIs' progress on the map, where the red dots indicate the movement of the character for earlier episodes and the bluer colors show the progress at later stages of the training. I could spend all day staring at these. Videos are available for many, many submissions, some of which even opened up their source code, and there are a few high-quality write-ups as well, so make sure to have a look. There's gonna be lots of fun to be had there. This competition gives us something that is scientifically interesting, practical and super fun at the same time. What more could you possibly want? Huge thumbs up for the OpenAI team for organizing this, and of course, congratulations to the participants. And now you see that we have a job where we train computers to play video games and we are even paid for it. What a time to be alive. By the way, if you wish to unleash the inner scholar in you, Two Minute Papers shirts are available in many sizes and colors. We have max 2. The links are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
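As a side note on how such a transfer-learning benchmark can be scored, here is a tiny, generic sketch in Python. It is not the actual contest or Gym Retro API; the agent and make_env objects are placeholders the caller would have to supply, and the episode counts are arbitrary. The only point it illustrates is that training touches one set of levels while the score comes exclusively from held-out levels.

import random

def average_score_on_unseen_levels(agent, make_env, train_levels, test_levels,
                                   train_episodes=1000, eval_episodes=20):
    # training phase: the agent only ever interacts with the training levels
    for _ in range(train_episodes):
        agent.train_episode(make_env(random.choice(train_levels)))
    # evaluation phase: levels the agent has never seen before
    scores = []
    for level in test_levels:
        returns = [agent.play_episode(make_env(level)) for _ in range(eval_episodes)]
        scores.append(sum(returns) / eval_episodes)
    return sum(scores) / len(scores)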
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.6000000000000005, "end": 10.48, "text": " This is a contest by OpenAI where a bunch of AI's compete to decide who has the best transfer"}, {"start": 10.48, "end": 12.24, "text": " learning capabilities."}, {"start": 12.24, "end": 17.240000000000002, "text": " Transfer learning means that the training and the testing environment differs significantly,"}, {"start": 17.240000000000002, "end": 22.400000000000002, "text": " therefore only the AI's that learn general concepts prevail and the ones that try to get"}, {"start": 22.400000000000002, "end": 25.64, "text": " by with memorizing things will quickly fall."}, {"start": 25.64, "end": 30.36, "text": " In this experiment, these programs start playing Sonic, the Hedgehog, and are given a bunch"}, {"start": 30.36, "end": 31.92, "text": " of levels to train on."}, {"start": 31.92, "end": 38.28, "text": " However, like in a good test at school, the levels for the final evaluation are kept secret."}, {"start": 38.28, "end": 43.96, "text": " So, the goal is that only high quality general algorithms prevail and we can cheat through"}, {"start": 43.96, "end": 47.56, "text": " the program as we don't know what the final exam will entail."}, {"start": 47.56, "end": 51.84, "text": " We only know that we have to make the most of the training materials to pass."}, {"start": 51.84, "end": 57.080000000000005, "text": " Sonic is a legendary platform game where we have to blaze through levels by avoiding obstacles"}, {"start": 57.080000000000005, "end": 61.160000000000004, "text": " and traps, often while traveling with the speed of sound."}, {"start": 61.160000000000004, "end": 65.88000000000001, "text": " Here you can see the winning submission taking the exam on a previously unseen level."}, {"start": 65.88000000000001, "end": 70.68, "text": " After one minute of training, as expected, the AI started to explore the controls, but"}, {"start": 70.68, "end": 75.12, "text": " is still quite inept and does not make any meaningful progress on the level."}, {"start": 75.12, "end": 80.48, "text": " After 30 minutes, things look significantly better as the AI now understands the basics"}, {"start": 80.48, "end": 81.64, "text": " of the game."}, {"start": 81.64, "end": 88.8, "text": " And look here, almost got up there and got it."}, {"start": 88.8, "end": 96.16, "text": " It is clearly making progress as it collects some coins, defeats enemies, goes through"}, {"start": 96.16, "end": 103.12, "text": " the loop and gets stuck seemingly because it doesn't yet know how being underwater changes"}, {"start": 103.12, "end": 104.76, "text": " how high it can jump."}, {"start": 104.76, "end": 108.72, "text": " This is quite a bit of a special case, so we are getting there."}, {"start": 108.72, "end": 115.36, "text": " After only 60 to 120 minutes, it became a competent player and was able to finish this challenging"}, {"start": 115.36, "end": 121.48, "text": " map with only a few mistakes, really impressive transfer learning in just about an hour."}, {"start": 121.48, "end": 140.0, "text": " Note that the algorithm has never seen this level before."}, {"start": 140.0, "end": 144.72, "text": " Here you see a really cool visualization of three different AI's progress on the map,"}, {"start": 144.72, "end": 149.36, "text": " where the red dots indicate the movement of the character for earlier episodes and the"}, {"start": 149.36, "end": 
153.4, "text": " bluer colors show the progress at later stages of the training."}, {"start": 153.4, "end": 156.36, "text": " I could spend all day staring at these."}, {"start": 156.36, "end": 161.12, "text": " Videos are available for many many submissions, some of which even opened up their source code"}, {"start": 161.12, "end": 165.4, "text": " and there are a few high quality write-ups as well, so make sure to have a look."}, {"start": 165.4, "end": 167.88000000000002, "text": " There's gonna be lots of fun to be had there."}, {"start": 167.88000000000002, "end": 172.76000000000002, "text": " This competition gives us something that is scientifically interesting, practical and"}, {"start": 172.76000000000002, "end": 175.08, "text": " super fun at the same time."}, {"start": 175.08, "end": 179.56, "text": " What more could you possibly want? Huge thumbs up for the open AI team for organizing"}, {"start": 179.56, "end": 183.08, "text": " this and of course, congratulations to the participants."}, {"start": 183.08, "end": 187.16000000000003, "text": " And now you see that we have a job where we train computers to play video games and we"}, {"start": 187.16000000000003, "end": 189.0, "text": " are even paid for it."}, {"start": 189.0, "end": 190.52, "text": " What a time to be alive."}, {"start": 190.52, "end": 194.88000000000002, "text": " By the way, if you wish to unleash the inner scholarly new, two minute paper shirts are"}, {"start": 194.88000000000002, "end": 197.4, "text": " available in many sizes and colors."}, {"start": 197.4, "end": 198.4, "text": " We have max 2."}, {"start": 198.4, "end": 200.64000000000001, "text": " The links are available in the video description."}, {"start": 200.64, "end": 204.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lCoR-4OlIZI
Style Transfer...For Smoke and Fluids! | Two Minute Papers #264
The paper "Example-based Turbulence Style Transfer" is available here: http://nishitalab.org/user/syuhei/TurbuStyleTrans/turbu_styletrans.html Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-984175/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Fluid and smoke simulations are widely used in computer games and in the movie industry, and are capable of creating absolutely stunning video footage. We can very quickly put together a coarse simulation and run it cheaply; however, the more turbulent motion we are trying to simulate, the more resources and time it will take. If we wish to create some footage with the amount of visual quality that you see here, well, if you think the several-hour computation time for light transport algorithms was too much, better hold on to your papers, because it will take not hours, but often from days to weeks to compute. And to ease the computation time of such simulations, here is a technique that performs style transfer, but this time not for paintings, but for fluid and smoke simulations. How cool is that? It takes the low-resolution source and detailed target footage, dices them up into small patches, and borrows from image and texture synthesis techniques to create a higher-resolution version of our input simulation. The challenge of this technique is that we cannot just put more swirly motion on top of our velocity fields, because this piece of fluid has to obey the laws of physics to look natural. Also, we have to make sure that there is not too much variation from patch to patch, so we have to perform some sort of smoothing on the boundaries of these patches. Our smoke plumes also have to interact with obstacles, which is anything but trivial to do well. Have a look at the ground truth results from the high-resolution simulation. This is the one that would take a long time to compute. There are clearly deviations, but given how coarse the input footage was, I'll take this any day of the week. We can now look forward to seeing even higher quality smoke and fluids in the animation movies of the near future. There was a similar technique by the name Wavelet Turbulence, which is one of my all-time favorite papers and was showcased in the very first Two Minute Papers episode. This is what it looked like, and we are now celebrating its 10th anniversary. Imagine what a bomb this was 10 years ago, and you know what? It is still going strong. Thanks for watching and for your generous support, and I'll see you next time.
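To illustrate the patch-based core of the idea in this transcript, here is a toy sketch in Python with NumPy on 2D density fields. The real method operates on velocity fields, respects the underlying physics and smooths patch boundaries; none of that is modeled here. This only shows the "match each coarse patch against a coarse/fine exemplar pair and copy over the fine detail" step, with made-up patch and upsampling sizes.

import numpy as np

def upsample(field, factor):
    # crude nearest-neighbour upsampling of a 2D field
    return np.kron(field, np.ones((factor, factor)))

def stylize(coarse, exemplar_coarse, exemplar_fine, patch=8, factor=4):
    # coarse: low-res input field; exemplar_coarse / exemplar_fine: a matching
    # low-res / high-res pair (exemplar_fine is `factor` times larger per axis)
    out = upsample(coarse, factor)
    H, W = coarse.shape
    # gather every candidate patch position in the coarse exemplar
    candidates = [(y, x, exemplar_coarse[y:y + patch, x:x + patch])
                  for y in range(exemplar_coarse.shape[0] - patch + 1)
                  for x in range(exemplar_coarse.shape[1] - patch + 1)]
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            query = coarse[y:y + patch, x:x + patch]
            # nearest-neighbour match in the coarse exemplar
            by, bx, _ = min(candidates, key=lambda c: np.sum((c[2] - query) ** 2))
            # copy the corresponding high-resolution detail into the output
            out[y * factor:(y + patch) * factor, x * factor:(x + patch) * factor] = \
                exemplar_fine[by * factor:(by + patch) * factor, bx * factor:(bx + patch) * factor]
    return out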
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karojona Ifehir."}, {"start": 4.32, "end": 8.92, "text": " Fluid and smoke simulations are widely used in computer games and in the movie industry"}, {"start": 8.92, "end": 12.96, "text": " and are capable of creating absolutely stunning video footage."}, {"start": 12.96, "end": 18.36, "text": " We can very quickly put together a core simulation and run it cheaply, however, the more turbulent"}, {"start": 18.36, "end": 22.72, "text": " motion we are trying to simulate, the more resources and time it will take."}, {"start": 22.72, "end": 26.8, "text": " If we wish to create some footage with the amount of visual quality that you see here,"}, {"start": 26.8, "end": 31.64, "text": " well, if you think the several hour computation time for light transport algorithms was too"}, {"start": 31.64, "end": 37.56, "text": " much, better hold on to your papers because it will take not hours, but often from days"}, {"start": 37.56, "end": 39.16, "text": " to weeks to compute."}, {"start": 39.16, "end": 45.040000000000006, "text": " And to ease the computation time of such simulations, this is a technique that performs style transfer,"}, {"start": 45.040000000000006, "end": 49.8, "text": " but this time not for paintings, but for fluid and smoke simulations."}, {"start": 49.8, "end": 51.519999999999996, "text": " How cool is that?"}, {"start": 51.52, "end": 57.36, "text": " It takes the low resolution source and detailed target footage, dices them up into small patches"}, {"start": 57.36, "end": 62.2, "text": " and boroughs from image and textures synthesis techniques to create a higher resolution"}, {"start": 62.2, "end": 64.28, "text": " version of our input simulation."}, {"start": 64.28, "end": 68.68, "text": " The challenge of this technique is that we cannot just put more swirly motion on top of"}, {"start": 68.68, "end": 73.48, "text": " our velocity fields because this piece of fluid has to obey to the laws of physics to"}, {"start": 73.48, "end": 74.48, "text": " look natural."}, {"start": 74.48, "end": 79.08000000000001, "text": " Also, we have to make sure that there is not too much variation from patch to patch, so"}, {"start": 79.08, "end": 83.6, "text": " we have to perform some sort of smoothing on the boundaries of these patches."}, {"start": 83.6, "end": 87.96, "text": " Our smoke plumes also have to interact with obstacles, which is anything but trivial"}, {"start": 87.96, "end": 89.2, "text": " to do well."}, {"start": 89.2, "end": 92.96, "text": " Have a look at the ground truth results from the high resolution simulation."}, {"start": 92.96, "end": 95.96, "text": " This is the one that would take a long time to compute."}, {"start": 95.96, "end": 100.32, "text": " There are clearly deviations, but given how much the input footage was, I'll take this"}, {"start": 100.32, "end": 101.75999999999999, "text": " any day of the week."}, {"start": 101.75999999999999, "end": 106.68, "text": " We can now look forward to seeing even higher quality smoke and fluids in the animation"}, {"start": 106.68, "end": 108.56, "text": " movies of the near future."}, {"start": 108.56, "end": 112.88, "text": " There was a similar technique by the name Wavelet Turbulence, which is one of my all-time"}, {"start": 112.88, "end": 117.88, "text": " favorite papers that has been showcased in the very first two-minute papers episode."}, {"start": 117.88, "end": 122.0, "text": " This is what it looked like, and we are now 
celebrating its 10th anniversary."}, {"start": 122.0, "end": 125.52000000000001, "text": " Imagine what a bomb this was 10 years ago, and you know what?"}, {"start": 125.52000000000001, "end": 126.92, "text": " It is still going strong."}, {"start": 126.92, "end": 154.04, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=gnctSz2ofU4
DeepMind's AI Learns To See | Two Minute Papers #263
Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh PayPal: https://www.paypal.me/TwoMinutePapers Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The papers "Neural scene representation and rendering" and "Gaussian Material Synthesis" are available here: 1. https://deepmind.com/documents/211/Neural_Scene_Representation_and_Rendering_preprint.pdf 2. https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-2035427/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a recent DeepMind paper on neural rendering, where they taught a learning-based technique to see things the way humans do. What's more, it has an understanding of geometry, viewpoints, shadows, occlusion, even self-shadowing and self-occlusion and many other difficult concepts. So what does this do and how does it work exactly? It contains a representation and a generation network. The representation network takes a bunch of observations, a few screenshots, if you will, and then condenses this visual sensory data into a concise description that contains the underlying information in the scene. These observations are made from only a handful of camera positions and viewpoints. The neural rendering, or seeing, part means that we choose a position and viewpoint that the algorithm hasn't seen yet and ask the generation network to create an appropriate image that matches reality. Now we have to hold on to our papers for a moment and understand why this is such a crazy idea. Computer graphics researchers work so hard on creating similar rendering and light simulation programs that take tons of computational power to compute all aspects of light transport and, in return, give us a beautiful image. If we slightly change the camera angles, we have to redo most of the same computations, whereas the learning-based algorithm may just say, don't worry, I got this, and from previous experience, guesses the remainder of the information perfectly. I love it. And what's more, by leaning on what these two networks learned, it generalizes so well that it can even deal with previously unobserved scenes. If you remember, I have also worked on a neural renderer for about 3,000 hours and created an AI that predicts photorealistic images perfectly. The difference was that this one took a fixed camera viewpoint and predicted what the object would look like if we started changing its material properties. I'd love to see a possible combination of these two works. Oh my, super excited for this. There's a link in the video description to both of these works. Can you think of other possible uses for these techniques? Let me know in the comments section. And if you wish to decide the order of future episodes or get your name listed as a key supporter for the series, hop over to our Patreon page and pick up some cool perks. We use these funds to improve the series and empower other research projects and conferences. As this video series is on the cutting edge of technology, of course, we also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The addresses are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
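To make the two-network structure more tangible, below is a minimal, hypothetical PyTorch sketch: a representation network pools per-observation codes into one scene description, and a generation network renders a query viewpoint from it. The layer sizes, the 7-dimensional pose vectors and the sum-based aggregation are assumptions for illustration, not DeepMind's actual architecture.

```python
# Hypothetical sketch of a representation + generation network pair.
# Layer sizes and the pose encoding are invented for illustration only.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64 + 7, 128)  # image code + 7-D camera pose

    def forward(self, images, poses):
        # images: (N, 3, H, W), poses: (N, 7); one code per observation,
        # summed into a single scene description.
        codes = self.fc(torch.cat([self.conv(images), poses], dim=1))
        return codes.sum(dim=0)

class GenerationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128 + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_code, query_pose):
        # Render an image for a viewpoint the networks have never observed.
        x = self.fc(torch.cat([scene_code, query_pose], dim=0))
        return self.deconv(x.view(1, 64, 8, 8))

# Usage: three observations of a 32x32 scene, then render an unseen viewpoint.
rep, gen = RepresentationNet(), GenerationNet()
scene_code = rep(torch.rand(3, 3, 32, 32), torch.rand(3, 7))
predicted_view = gen(scene_code, torch.rand(7))
```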
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.46, "end": 9.1, "text": " This is a recent DeepMind paper on Neuror rendering, where they taught a learning-based technique"}, {"start": 9.1, "end": 11.98, "text": " to see things the way humans do."}, {"start": 11.98, "end": 18.6, "text": " What's more, it has an understanding of geometry, viewpoints, shadows, occlusion, even self-shadowing"}, {"start": 18.6, "end": 21.900000000000002, "text": " and self-occlusion and many other difficult concepts."}, {"start": 21.900000000000002, "end": 24.96, "text": " So what does this do and how does it work exactly?"}, {"start": 24.96, "end": 28.34, "text": " It contains a representation and a generation network."}, {"start": 28.34, "end": 33.42, "text": " The representation network takes a bunch of observations, a few screenshots, if you will,"}, {"start": 33.42, "end": 38.9, "text": " and then calls this visual sensory data into a concise description that contains the underlying"}, {"start": 38.9, "end": 40.86, "text": " information in the scene."}, {"start": 40.86, "end": 45.82, "text": " These observations are made from only a handful of camera positions and viewpoints."}, {"start": 45.82, "end": 50.620000000000005, "text": " The Neuror rendering or seeing part means that we choose a position and viewpoint that"}, {"start": 50.620000000000005, "end": 55.54, "text": " the algorithm hasn't seen yet and ask the generation network to create an appropriate"}, {"start": 55.54, "end": 57.900000000000006, "text": " image that matches reality."}, {"start": 57.9, "end": 63.019999999999996, "text": " Now we have to hold on to our papers for a moment and understand why this is such a crazy"}, {"start": 63.019999999999996, "end": 64.02, "text": " idea."}, {"start": 64.02, "end": 69.32, "text": " Computer graphics researchers work so hard on creating similar rendering and light simulation"}, {"start": 69.32, "end": 75.14, "text": " programs that take tons of computational power to compute all aspects of light transport"}, {"start": 75.14, "end": 77.94, "text": " and then return give us a beautiful image."}, {"start": 77.94, "end": 82.9, "text": " If we slightly change the camera angles, we have to redo most of the same computations,"}, {"start": 82.9, "end": 87.46000000000001, "text": " whereas the learning-based algorithm may just say, don't worry, I got this."}, {"start": 87.46, "end": 92.17999999999999, "text": " And from previous experience, guesses the remainder of the information perfectly."}, {"start": 92.17999999999999, "end": 93.69999999999999, "text": " I love it."}, {"start": 93.69999999999999, "end": 98.74, "text": " And what's more, by leaning on what these two networks learned, it generalizes so well"}, {"start": 98.74, "end": 102.5, "text": " that it can even deal with previously unobserved scenes."}, {"start": 102.5, "end": 107.61999999999999, "text": " If you remember, I have also worked on a Neuror renderer for about 3,000 hours and created"}, {"start": 107.61999999999999, "end": 111.3, "text": " an AI that predicts photorealistic images perfectly."}, {"start": 111.3, "end": 116.13999999999999, "text": " The difference was that this one took a fixed camera viewpoint and predicted what the object"}, {"start": 116.14, "end": 119.62, "text": " would look like if we started changing its material properties."}, {"start": 119.62, "end": 122.74, "text": " I'd love to see a possible combination of these two 
works."}, {"start": 122.74, "end": 125.5, "text": " Oh my, super excited for this."}, {"start": 125.5, "end": 128.78, "text": " There's a link in the video description to both of these works."}, {"start": 128.78, "end": 132.06, "text": " Can you think of other possible uses for these techniques?"}, {"start": 132.06, "end": 133.9, "text": " Let me know in the comments section."}, {"start": 133.9, "end": 138.3, "text": " And if you wish to decide the order of future episodes or get your name listed as a"}, {"start": 138.3, "end": 143.34, "text": " key supporter for the series, hop over to our Patreon page and pick up some cool perks."}, {"start": 143.34, "end": 148.54, "text": " We use these funds to improve the series and empower other research projects and conferences."}, {"start": 148.54, "end": 152.94, "text": " As this video series is on the cutting edge of technology, of course, we also support"}, {"start": 152.94, "end": 156.54, "text": " cryptocurrencies like Bitcoin, Ethereum and Litecoin."}, {"start": 156.54, "end": 158.98000000000002, "text": " The addresses are available in the video description."}, {"start": 158.98, "end": 188.54, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KEdrBMZx53w
Infinite Walking in Virtual Reality | Two Minute Papers #262
The paper "Towards Virtual Reality Infinite Walking: Dynamic Saccadic Redirection " is available here: http://research.nvidia.com/publication/2018-08_Towards-Virtual-Reality Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-2561233/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #vr #metaverse
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We are over 260 episodes into the series, but believe it or not, we haven't had a single episode on virtual reality. So at this point, you probably know that this paper has to be really good. The promise of virtual reality is indeed truly incredible. Doctors could be trained to perform surgery in a virtual environment, or even perform surgery from afar, we could enhance military training by putting soldiers into better flight simulators, expose astronauts to virtual zero-gravity simulations, you name it, and of course, games. As you see, virtual reality, or VR in short, is on the rise these days, and there is a lot of research going on on how to make more killer applications for it. The basics are simple. We put on a VR headset, walk around in our room and perform gestures, and these will be performed in a virtual world by our avatar. Sounds super fun, right? Well, yes, however, we have this headset on, and we don't really see our surroundings within the room, which makes it easy to bump into objects, or smash the controller into a wall, which is exactly what I did in the MVDLAB in Switzerland not so long ago. My greetings to all the kind people there, and sorry folks. So, what could be a possible solution? Creating virtual worlds with smaller scales? That kind of defeats the purpose, doesn't it? There has to be a better solution. So how about redirection? Redirection is a simple concept that changes our movement in the virtual world, so it deviates from our real path in the room in a way that both lets us explore the virtual world well, and not bump into walls and objects in the meantime. Most existing techniques out there either don't do redirection and make us bump into objects and walls within our room, or they do redirection at the cost of introducing distortions and other disturbing changes into the virtual environment. This is not easy to perform well, because it has to feel natural, even though the changes we apply to the path deviate from what is natural. Here you can see how the blue and orange lines deviate, which means that the algorithm is at work. With this, we can wander around in a huge and majestic virtual landscape, or a cramped bar, even when being confined to a small physical room. Loving the idea. This technique takes into consideration even other moving players in the room and dynamically remaps our virtual paths to make sure we don't bump into them. There is a lot more in the paper that describes how the whole method adapts to human perception. Papers like this make me really happy, because there are thousands of papers in the domain of human perception within computer graphics, many of which will now see quite a bit of practical use. VR is going to be a huge enabler for this area. Thanks for watching and for your generous support, and I'll see you next time.
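As a rough illustration of what redirection can look like in code, the sketch below computes a small extra rotation to inject into the virtual camera each frame so that the user's real-world path gently curves back toward the room centre while the virtual path feels straight. The steering rule and the per-frame gain cap are invented for this example and are not taken from the paper, which hides these corrections during eye movements.

```python
# Hypothetical per-frame redirection gain: steer the user toward the room
# centre with a rotation small enough to go unnoticed. Values are made up.
import numpy as np

def redirect_step(real_pos, real_heading, room_center, max_gain_deg=1.5):
    """Return the extra virtual yaw (in degrees) to inject this frame."""
    to_center = room_center - real_pos
    to_center = to_center / (np.linalg.norm(to_center) + 1e-9)
    heading = np.array([np.cos(real_heading), np.sin(real_heading)])
    # Signed angle between the walking direction and the direction to centre.
    cross = heading[0] * to_center[1] - heading[1] * to_center[0]
    signed = np.arctan2(cross, float(heading @ to_center))
    # Cap the injected rotation so it stays below the perceptual threshold.
    return float(np.clip(np.degrees(signed), -max_gain_deg, max_gain_deg))

# Usage: user at (1, 2) metres, facing along +x, room centred at the origin.
extra_yaw = redirect_step(np.array([1.0, 2.0]), 0.0, np.zeros(2))
```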
[{"start": 0.0, "end": 4.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifahir."}, {"start": 4.2, "end": 9.4, "text": " We are over 260 episodes into the series, but believe it or not, we haven't had a single"}, {"start": 9.4, "end": 11.52, "text": " episode on virtual reality."}, {"start": 11.52, "end": 15.72, "text": " So at this point, you probably know that this paper has to be really good."}, {"start": 15.72, "end": 19.84, "text": " The promise of virtual reality is indeed truly incredible."}, {"start": 19.84, "end": 24.52, "text": " Doctors could be trained to perform surgery in a virtual environment, or even perform surgery"}, {"start": 24.52, "end": 29.36, "text": " from afar, we could enhance military training by putting soldiers into better flight"}, {"start": 29.36, "end": 35.72, "text": " simulators, expose astronauts to virtual zero-gravity simulations, you name it, and of course,"}, {"start": 35.72, "end": 36.72, "text": " games."}, {"start": 36.72, "end": 41.4, "text": " As you see, virtual reality, or VR in short, is on the rise these days, and there is"}, {"start": 41.4, "end": 45.84, "text": " a lot of research going on on how to make more killer applications for it."}, {"start": 45.84, "end": 47.28, "text": " The basics are simple."}, {"start": 47.28, "end": 52.28, "text": " We put on a VR headset and walk around in our room and perform gestures, and these will"}, {"start": 52.28, "end": 56.16, "text": " be performed in a virtual world by our avatar."}, {"start": 56.16, "end": 57.879999999999995, "text": " Sounds super fun, right?"}, {"start": 57.88, "end": 62.88, "text": " Well, yes, however, we have this headset on, and we don't really see our surroundings"}, {"start": 62.88, "end": 67.68, "text": " within the room, which makes it easy to bump into objects, or smash the controller into"}, {"start": 67.68, "end": 72.84, "text": " a wall, which is exactly what I did in the MVDLAB in Switzerland not so long ago."}, {"start": 72.84, "end": 75.84, "text": " My greetings to all the kind people there, and sorry folks."}, {"start": 75.84, "end": 79.04, "text": " So, what could be a possible solution?"}, {"start": 79.04, "end": 83.64, "text": " Creating virtual worlds with smaller scales, that kind of defeats the purpose, doesn't"}, {"start": 83.64, "end": 84.64, "text": " it?"}, {"start": 84.64, "end": 86.52000000000001, "text": " There has to be a better solution."}, {"start": 86.52, "end": 88.44, "text": " So how about redirection?"}, {"start": 88.44, "end": 93.39999999999999, "text": " Redirection is a simple concept that changes our movement in the virtual world, so it deviates"}, {"start": 93.39999999999999, "end": 99.0, "text": " from our real path in the room in a way that both lets us explore the virtual world well,"}, {"start": 99.0, "end": 102.36, "text": " and not bump into walls and objects in the meantime."}, {"start": 102.36, "end": 106.92, "text": " Most existing techniques out there either don't do redirection, and make us bump into objects"}, {"start": 106.92, "end": 113.19999999999999, "text": " and walls within our room, or they do redirection at the cost of introducing distortions and other"}, {"start": 113.19999999999999, "end": 116.24, "text": " disturbing changes into the virtual environment."}, {"start": 116.24, "end": 121.39999999999999, "text": " This is not easy to perform well because it has to feel natural, but the changes we apply"}, {"start": 121.39999999999999, "end": 124.39999999999999, "text": " to the path deviates 
from what is natural."}, {"start": 124.39999999999999, "end": 129.07999999999998, "text": " Here you can see how the blue and orange lines deviate, which means that the algorithm"}, {"start": 129.07999999999998, "end": 130.32, "text": " is at work."}, {"start": 130.32, "end": 135.68, "text": " With this, we can wonder about in a huge and majestic virtual landscape, or a cramped"}, {"start": 135.68, "end": 139.88, "text": " bar, even when being confined to a small physical room."}, {"start": 139.88, "end": 141.07999999999998, "text": " Loving the idea."}, {"start": 141.07999999999998, "end": 146.04, "text": " This technique takes into consideration even other moving players in the room and dynamically"}, {"start": 146.04, "end": 149.92, "text": " remap our virtual paths to make sure we don't bump into them."}, {"start": 149.92, "end": 155.2, "text": " There is a lot more in the paper that describes how the whole method adapts to human perception."}, {"start": 155.2, "end": 159.44, "text": " Papers like this make me really happy because there are thousands of papers in the domain"}, {"start": 159.44, "end": 164.56, "text": " of human perception within computer graphics, many of which will now see quite a bit of practical"}, {"start": 164.56, "end": 165.56, "text": " use."}, {"start": 165.56, "end": 168.2, "text": " VR is going to be a huge enabler for this area."}, {"start": 168.2, "end": 178.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=WMr9ljLomUI
An AI For Image Manipulation Detection | Two Minute Papers #261
Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers PayPal and crypto links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "Learning Rich Features for Image Manipulation Detection" is available here: https://arxiv.org/abs/1805.04953 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1440055/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As facial reenactment videos are improving at a rapid pace, it is getting easier and easier to create video impersonations of other people by transferring our gestures onto their faces. We have recently discussed a technique that is able to localize the modified regions within these videos, however, that technique was limited to human facial reenactment. That is great, but what about the more general case with manipulated photos? Well, do not worry for a second, because this new learning-based algorithm can look at any image and highlight the regions that were tampered with. It can detect image splicing, which means that we take a part of a different image and add it to this one. Or, copying an object and pasting it to the image elsewhere. Or, removing an object from a photo and filling in the hole with meaningful information harvested from the image. This, we also refer to as image inpainting, and this is something that we also use often to edit our thumbnail images that you see here on YouTube. Believe it or not, it can detect all of these cases. And it uses a two-stream convolutional neural network to accomplish this. So what does this mean exactly? This means a learning algorithm that looks at one, the color data of the image, to try to find unnatural contrast changes along edges and silhouettes, and two, the noise information within the image as well, and sees how they relate to each other. Typically, if the image has been tampered with, either the noise or the color data is disturbed, or it may be that they look good one by one, but the relation of the two has changed. The algorithm is able to detect these anomalies too. As many of the images we see on the internet are either resized or compressed or both, it is of utmost importance that the algorithm does not look at compression artifacts and think that the image has been tampered with. This is something that even humans struggle with on a regular basis, and this is luckily not the case with this algorithm. This is great, because smart attackers may try to conceal their mistakes by recompressing an image and thereby adding more artifacts to it. It's not going to fool this algorithm. However, as you Fellow Scholars pointed out in the comments of a previous episode, if we have a neural network that is able to distinguish forged images, with a little modification we can perhaps turn it around and use it as a discriminator to help train a neural network that produces better forgeries. Hmm, what do you think about that? It is of utmost importance that we inform the public that these tools exist. If you wish to hear more about this topic and you think that a bunch of videos like this a month is worth a dollar, please consider supporting us on Patreon. You know the drill, a dollar a month is almost nothing, but it keeps the papers coming. Also, for the price of a coffee, you get exclusive early access to every new episode we release, and there are even more perks on our Patreon page, patreon.com slash two minute papers. We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The addresses are available in the video description. With your help, we can make better videos in the future. Thanks for watching and for your generous support, and I'll see you next time.
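A minimal sketch of such a two-stream detector, assuming a simple high-pass filter as a stand-in for the paper's noise features and tiny placeholder CNNs for both streams, could look like this:

```python
# Hypothetical two-stream detector: one stream sees raw colors, the other a
# high-pass "noise residual"; fused features feed a real/tampered classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

# A simple 3x3 high-pass kernel standing in for learned noise filters.
HIGH_PASS = torch.tensor([[-1., -1., -1.],
                          [-1.,  8., -1.],
                          [-1., -1., -1.]]).view(1, 1, 3, 3) / 8.0

def make_stream():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = make_stream()
        self.noise_stream = make_stream()
        self.classifier = nn.Linear(64, 2)  # authentic vs. tampered

    def forward(self, image):
        # Per-channel high-pass filtering approximates a noise residual.
        kernel = HIGH_PASS.repeat(3, 1, 1, 1).to(image.device)
        noise = F.conv2d(image, kernel, padding=1, groups=3)
        features = torch.cat(
            [self.rgb_stream(image), self.noise_stream(noise)], dim=1)
        return self.classifier(features)

# Usage on a dummy batch of two 64x64 images.
logits = TwoStreamDetector()(torch.rand(2, 3, 64, 64))
```

The actual paper goes further and wraps its two streams in a region-proposal framework so that the tampered areas can be localized, not just classified.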
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fehir."}, {"start": 4.5600000000000005, "end": 9.92, "text": " As facial reenactment videos are improving at a rapid pace, it is getting easier and easier"}, {"start": 9.92, "end": 15.700000000000001, "text": " to create video impersonations of other people by transferring our gestures onto their faces."}, {"start": 15.700000000000001, "end": 19.66, "text": " We have recently discussed a technique that is able to localize the modified regions"}, {"start": 19.66, "end": 25.04, "text": " within these videos, however, this technique was limited to human facial reenactment."}, {"start": 25.04, "end": 29.0, "text": " That is great, but what about the more general case with manipulated photos?"}, {"start": 29.0, "end": 34.22, "text": " Well, do not worry for a second because this new learning-based algorithm can look at any image"}, {"start": 34.22, "end": 36.76, "text": " and highlight the regions that were tempered with."}, {"start": 36.76, "end": 41.36, "text": " It can detect image splicing, which means that we take a part of a different image"}, {"start": 41.36, "end": 43.1, "text": " and add it to this one."}, {"start": 43.1, "end": 47.900000000000006, "text": " Or, copying an object and pasting it to the image elsewhere."}, {"start": 47.900000000000006, "end": 54.64, "text": " Or, removing an object from a photo and filling in the hole with meaningful information harvested from the image."}, {"start": 54.64, "end": 59.86, "text": " This, we also refer to as image in-painting, and this is something that we also use often"}, {"start": 59.86, "end": 62.8, "text": " to edit our thumbnail images that you see here on YouTube."}, {"start": 62.8, "end": 66.04, "text": " Believe it or not, it can detect all of these cases."}, {"start": 66.04, "end": 70.48, "text": " And it uses a two-stream convolution on your own network to accomplish this."}, {"start": 70.48, "end": 72.28, "text": " So what does this mean exactly?"}, {"start": 72.28, "end": 76.78, "text": " This means a learning algorithm that looks at one, the color data of the image,"}, {"start": 76.78, "end": 81.18, "text": " to try to find unnatural contrast changes along edges and silhouettes,"}, {"start": 81.18, "end": 87.04, "text": " and two, the noise information within the image as well, and see how they relate to each other."}, {"start": 87.04, "end": 92.58000000000001, "text": " Typically, if the image has been tempered with either the noise or the color data is disturbed,"}, {"start": 92.58000000000001, "end": 97.88000000000001, "text": " or it may be that they look good one by one, but the relation of the two has changed."}, {"start": 97.88000000000001, "end": 100.84, "text": " The algorithm is able to detect these anomalies too."}, {"start": 100.84, "end": 105.88000000000001, "text": " As many of the images we see on the internet are either resized or compressed or both,"}, {"start": 105.88000000000001, "end": 110.38000000000001, "text": " it is of utmost importance that the algorithm does not look at compression artifacts"}, {"start": 110.38, "end": 112.83999999999999, "text": " and thinks that the image has been tempered with."}, {"start": 112.83999999999999, "end": 116.47999999999999, "text": " This is something that even humans struggle with on a regular basis,"}, {"start": 116.47999999999999, "end": 119.08, "text": " and this is luckily not the case with this algorithm."}, {"start": 119.08, "end": 123.28, 
"text": " This is great because smart attackers may try to conceal their mistakes"}, {"start": 123.28, "end": 127.17999999999999, "text": " by recompressing an image and thereby adding more artifacts to it."}, {"start": 127.17999999999999, "end": 129.28, "text": " It's not going to fool this algorithm."}, {"start": 129.28, "end": 133.57999999999998, "text": " However, as you follow scholars pointed out in the comments of a previous episode,"}, {"start": 133.57999999999998, "end": 137.38, "text": " if we have a neural network that is able to distinguish forged images,"}, {"start": 137.38, "end": 142.28, "text": " with a little modification we can perhaps turn it around and use it as a discriminator"}, {"start": 142.28, "end": 146.38, "text": " to help training a neural network that produces better forgeries."}, {"start": 146.38, "end": 148.78, "text": " Hmm, what do you think about that?"}, {"start": 148.78, "end": 153.07999999999998, "text": " It is of utmost importance that we inform the public that these tools exist."}, {"start": 153.07999999999998, "end": 158.68, "text": " If you wish to hear more about this topic and you think that a bunch of videos like this a month is worth a dollar,"}, {"start": 158.68, "end": 161.07999999999998, "text": " please consider supporting us on Patreon."}, {"start": 161.07999999999998, "end": 165.68, "text": " You know the drill, a dollar a month is almost nothing, but it keeps the papers coming."}, {"start": 165.68, "end": 171.48000000000002, "text": " Also, for the price of a coffee, you get exclusive early access to every new episode we release,"}, {"start": 171.48000000000002, "end": 176.48000000000002, "text": " and there are even more perks on our Patreon page, patreon.com slash two minute papers."}, {"start": 176.48000000000002, "end": 180.58, "text": " We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin."}, {"start": 180.58, "end": 183.08, "text": " The addresses are available in the video description."}, {"start": 183.08, "end": 185.88, "text": " With your help, we can make better videos in the future."}, {"start": 185.88, "end": 195.88, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YTup-cvELK0
Beautiful Layered Materials, Instantly | Two Minute Papers #260
The paper "Efficient Rendering of Layered Materials using an Atomic Decomposition with Statistical Operators" is available here: https://belcour.github.io/blog/research/2018/05/05/brdf-realtime-layered.html My course on photorealistic rendering at the Technical University of Vienna: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time and crypto payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As animation movies and computer game graphics become more and more realistic, they draw us more and more into their own worlds. However, when we see cars, wax, minerals, paintings and similar materials, we often feel that something is not right there, and the illusion quickly crumbles. This is such a peculiar collection of materials, so what is the common denominator between them? Normally, to create these beautiful images, we use programs that create millions and millions of rays of light and simulate how they bounce off of the objects within the scene. However, most of these programs bounce these rays off of the surface of these objects, where in reality there are many sophisticated multi-layered materials with all kinds of coatings and varnishes. Such a simple surface model is not adequate to model these multiple layers. This new technique is able to simulate not only these surface interactions, but how light is scattered, transferred and absorbed within these layers, enabling us to create even more beautiful images with more sophisticated materials. We can envision new material models with any number of layers and it will be able to handle it. However, I left the best part for last. What is even cooler is that it takes advantage of the regularity of the data and builds a statistical model that approximates what typically happens with our light rays within these layers. What this results in is a real-time technique that still remains accurate. This is not normal. This used to take hours. This is insanity. And the whole paper was written by only one author, Laurent Belcour, and was accepted to the most prestigious research venue in computer graphics, so huge congrats to Laurent for accomplishing this. If you would like to learn more about light transport, I am holding a master-level course on it at the Technical University of Vienna. This course used to take place behind closed doors, but I feel that the teachings shouldn't only be available for the 20-30 people who can afford a university education, but they should be available for everyone. So we recorded the entirety of the course and it is now available for everyone free of charge. If you are interested, have a look at the video description to watch them. Thanks for watching and for your generous support, and I'll see you next time.
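The flavor of combining layers can be illustrated with the classical scalar adding equations, where light bouncing back and forth between two layers forms a geometric series. This toy version ignores angles, roughness and color, which the paper's statistical operators do track, so it is only a conceptual sketch:

```python
# Scalar "adding" of two layers: sum the series of inter-reflections between
# a top layer (r1, t1) and a bottom layer (r2, t2). Illustration only.
def add_layers(r1, t1, r2, t2):
    """Combine two layers into one effective reflectance and transmittance."""
    multiple_bounce = 1.0 / (1.0 - r1 * r2)    # geometric series of bounces
    r = r1 + t1 * r2 * t1 * multiple_bounce    # down, reflect, back up
    t = t1 * t2 * multiple_bounce              # straight through both layers
    return r, t

# Usage: a clear varnish coat over an absorbing, opaque base layer.
coat_r, coat_t = 0.04, 0.92
base_r, base_t = 0.35, 0.00
total_r, total_t = add_layers(coat_r, coat_t, base_r, base_t)
```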
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.32, "end": 8.8, "text": " As animation movies and computer game graphics become more and more realistic, they draw"}, {"start": 8.8, "end": 11.120000000000001, "text": " us more and more into their own worlds."}, {"start": 11.120000000000001, "end": 17.28, "text": " However, when we see cars, max, minerals and paintings and similar materials, we often"}, {"start": 17.28, "end": 21.68, "text": " feel that something is not right there and the illusion quickly crumbles."}, {"start": 21.68, "end": 27.080000000000002, "text": " This is such a peculiar collection of materials, so what is the common denominator between them?"}, {"start": 27.08, "end": 31.88, "text": " Normally, to create these beautiful images, we use programs that create millions of millions"}, {"start": 31.88, "end": 36.599999999999994, "text": " of rays of light and simulate how they bounce off of the objects within the scene."}, {"start": 36.599999999999994, "end": 41.56, "text": " However, most of these programs bounce, these rays off of the surface of these objects,"}, {"start": 41.56, "end": 46.56, "text": " where in reality there are many sophisticated multi-layered materials with all kinds of"}, {"start": 46.56, "end": 48.480000000000004, "text": " coatings and varnishes."}, {"start": 48.480000000000004, "end": 53.12, "text": " Such a simple surface model is not adequate to model these multiple layers."}, {"start": 53.12, "end": 57.68, "text": " This new technique is able to simulate not only these surface interactions, but how light"}, {"start": 57.68, "end": 63.08, "text": " is scattered, transferred and absorbed within these layers, enabling us to create even"}, {"start": 63.08, "end": 66.4, "text": " more beautiful images with more sophisticated materials."}, {"start": 66.4, "end": 71.2, "text": " We can envision new material models with any number of layers and it will be able to handle"}, {"start": 71.2, "end": 72.2, "text": " it."}, {"start": 72.2, "end": 74.28, "text": " However, I left the best part for last."}, {"start": 74.28, "end": 79.08, "text": " What is even cooler is that it takes advantage of the regularity of the data and builds a"}, {"start": 79.08, "end": 83.84, "text": " statistical model that approximates what typically happens with our light rays within these"}, {"start": 83.84, "end": 84.84, "text": " layers."}, {"start": 84.84, "end": 89.2, "text": " What this results in is a real-time technique that still remains accurate."}, {"start": 89.2, "end": 90.36, "text": " This is not normal."}, {"start": 90.36, "end": 92.24, "text": " This used to take hours."}, {"start": 92.24, "end": 93.64, "text": " This is insanity."}, {"start": 93.64, "end": 98.16, "text": " And the whole paper was written by only one author, Laurent Belcoux, and was accepted"}, {"start": 98.16, "end": 102.96, "text": " to the most prestigious research venue in computer graphics, so huge congress to Laurent"}, {"start": 102.96, "end": 104.4, "text": " for accomplishing this."}, {"start": 104.4, "end": 108.36, "text": " If you would like to learn more about light transport, I am holding a master-level course"}, {"start": 108.36, "end": 110.92, "text": " on it at the Technical University of Vienna."}, {"start": 110.92, "end": 115.6, "text": " This course used to take place behind closed doors, but I feel that the teachings shouldn't"}, {"start": 115.6, "end": 121.12, "text": " only be available for the 20-30 
people who can afford a university education, but they"}, {"start": 121.12, "end": 123.16, "text": " should be available for everyone."}, {"start": 123.16, "end": 127.88, "text": " So we recorded the entirety of the course and it is now available for everyone free of"}, {"start": 127.88, "end": 128.88, "text": " charge."}, {"start": 128.88, "end": 131.68, "text": " If you are interested, have a look at the video description to watch them."}, {"start": 131.68, "end": 138.68, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Te0L5_u_wIg
Faceforensics: This AI Detects DeepFakes!
The paper "FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces " is available here: http://niessnerlab.org/projects/roessler2018faceforensics.html Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-593358/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Faceforensics #Deepfake
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With the recent ascendancy of several new AI-based techniques for human facial reenactment, we are now able to create videos where we transfer our gestures onto famous actors or politicians and impersonate them. Clearly, as this only needs a few minutes of video as training data from the target, this could be super useful for animating photorealistic characters for video games and movies, reviving legendary actors who are not with us anymore, and much more. And understandably, some are worried about the social implications of such a powerful tool. In other words, if there are tools to create forgery, there should be tools to detect forgery, right? If we can train an AI to impersonate, why not train another AI to detect impersonation? This has to be an arms race. However, this is no easy task, to say the least. As an example, look here. Some of these faces are real, some are fake. What do you think? Which is which? I will have to admit, my guesses weren't all that great. But what about you? Let me know in the comment section. Compression is also an issue. Since all videos you see here on YouTube are compressed in some way to reduce file size, some of the artifacts that appear may easily throw off not only an AI, but a human as well. I bet there will be many completely authentic videos that will be thought of as fakes by humans in the near future. So how do we solve these problems? First, to obtain a neural network-based solution, we need a large dataset to train it on. This paper contains a useful dataset with over a thousand videos that we can use to train such a neural network. These records contain pairs of original and manipulated videos along with the input footage of the gestures that were transferred. After the training step, the algorithm will be able to pick up on the smallest changes around the face and tell a forged footage from a real one, even in cases where we humans are unable to do that. This is really amazing. These green-to-red colors showcase regions that the AI thinks were tampered with. And it is correct. Interestingly, this can not only identify regions that are forgeries, but it can also improve these forgeries too. I wonder if it can detect footage that it has improved itself. What do you think? Thanks for watching and for your generous support, and I'll see you next time.
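For illustration only, a detector trained on such a dataset boils down to a supervised binary classifier over face crops from the original and manipulated videos. The tiny network and the random tensors below are placeholders for real training data, so this is a minimal sketch rather than the paper's benchmark models:

```python
# Hypothetical training loop for a real/manipulated frame classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                      # 0 = pristine, 1 = manipulated
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    frames = torch.rand(8, 3, 64, 64)      # stand-in for cropped face regions
    labels = torch.randint(0, 2, (8,))     # stand-in for dataset labels
    loss = loss_fn(model(frames), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```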
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.38, "end": 9.74, "text": " With a recent ascendancy of several new AI-based techniques for human-facial reenactment, we"}, {"start": 9.74, "end": 15.58, "text": " are now able to create videos where we transfer our gestures onto famous actors or politicians"}, {"start": 15.58, "end": 17.06, "text": " and impersonate them."}, {"start": 17.06, "end": 21.54, "text": " Clearly, as this only needs a few minutes of video as training data from the target, this"}, {"start": 21.54, "end": 26.94, "text": " could be super useful for animating photorealistic characters for video games and movies, reviving"}, {"start": 26.94, "end": 30.82, "text": " legendary actors who are not with us anymore and much more."}, {"start": 30.82, "end": 35.22, "text": " And understandably, some are worried about the social implications of such a powerful"}, {"start": 35.22, "end": 36.02, "text": " tool."}, {"start": 36.02, "end": 41.5, "text": " In other words, if there are tools to create forgery, there should be tools to detect forgery,"}, {"start": 41.5, "end": 42.5, "text": " right?"}, {"start": 42.5, "end": 48.14, "text": " If we can train an AI to impersonate, why not train an other AI to detect impersonation?"}, {"start": 48.14, "end": 49.7, "text": " This has to be an arm stress."}, {"start": 49.7, "end": 52.900000000000006, "text": " However, this is no easy task to say the least."}, {"start": 52.900000000000006, "end": 54.78, "text": " As an example, look here."}, {"start": 54.78, "end": 57.86, "text": " Some of these faces are real, some are fake."}, {"start": 57.86, "end": 58.86, "text": " What do you think?"}, {"start": 58.86, "end": 61.46, "text": " Which is which?"}, {"start": 61.46, "end": 64.7, "text": " I will have to admit, my guess is weren't all that great."}, {"start": 64.7, "end": 65.7, "text": " But what about you?"}, {"start": 65.7, "end": 67.9, "text": " Let me know in the comment section."}, {"start": 67.9, "end": 69.5, "text": " Compression is also an issue."}, {"start": 69.5, "end": 74.42, "text": " Since all videos you see here on YouTube are compressed in some way to reduce file size,"}, {"start": 74.42, "end": 79.82, "text": " some of the artifacts that appear may easily throw off not only an AI, but a human as well."}, {"start": 79.82, "end": 84.69999999999999, "text": " I bet there will be many completely authentic videos that will be thought of as fakes by"}, {"start": 84.69999999999999, "end": 86.41999999999999, "text": " humans in the near future."}, {"start": 86.41999999999999, "end": 88.05999999999999, "text": " So how do we solve these problems?"}, {"start": 88.05999999999999, "end": 92.74, "text": " First, to obtain a neural network-based solution, we need a large data set to train it"}, {"start": 92.74, "end": 93.74, "text": " on."}, {"start": 93.74, "end": 98.5, "text": " This paper contains a useful data set with over a thousand videos that we can use to train"}, {"start": 98.5, "end": 100.13999999999999, "text": " such a neural network."}, {"start": 100.13999999999999, "end": 105.46, "text": " These records contain pairs of original and manipulated videos along with the input footage"}, {"start": 105.46, "end": 107.61999999999999, "text": " of the gestures that were transferred."}, {"start": 107.62, "end": 111.7, "text": " After the training step, the algorithm will be able to pick up on the smallest changes"}, {"start": 111.7, "end": 117.58000000000001, 
"text": " around the face and tell a forged footage from a real one, even in cases where we humans"}, {"start": 117.58000000000001, "end": 119.30000000000001, "text": " are unable to do that."}, {"start": 119.30000000000001, "end": 121.22, "text": " This is really amazing."}, {"start": 121.22, "end": 126.02000000000001, "text": " These green-to-red colors showcase regions that the AI things were tampered with."}, {"start": 126.02000000000001, "end": 127.42, "text": " And it is correct."}, {"start": 127.42, "end": 132.1, "text": " Interestingly, this can not only identify regions that are forgeries, but it can also"}, {"start": 132.1, "end": 134.06, "text": " improve these forgeries too."}, {"start": 134.06, "end": 138.1, "text": " I wonder if it can detect footage that it has improved itself."}, {"start": 138.1, "end": 139.1, "text": " What do you think?"}, {"start": 139.1, "end": 166.34, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Nq2xvsVojVo
Better Video Impersonations with AI | Two Minute Papers #258
The paper "Deep Video Portraits" is available here: http://gvv.mpi-inf.mpg.de/projects/DeepVideoPortraits/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers One-time and crypto payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-2119595/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Earlier, we talked about an amazing technique where the inputs were a source video of ourselves and a target actor. And the output was a reenactment, in other words, a video of this target actor with our facial gestures. This requires only a few minutes of video from the target, which is usually already available on the internet. Essentially, we can impersonate other people, at least for one video. A key part of this new technique is that it extracts additional data, such as poses and eye positions, both from the source and target videos, and uses this data for the reconstruction. As opposed to the original Face2Face technique from two years ago, which was already mind-blowing, you see here that this results in a new learning-based method that supports the reenactment of eyebrows and blinking, changing the background, plus head and gaze positioning as well. So far, this would still be similar to a non-learning-based technique we've seen a few episodes ago. And now, hold on to your papers, because this algorithm enables us to not only impersonate, but also control the characters in the output video. The results are truly mesmerizing. I almost fell out of the chair when I first saw them. And what's more, we can create really rich reenactments by editing the expressions, poses, and blinking separately by hand. What also needs to be emphasized here is that we see and talk to other human beings all the time, so we have a remarkably keen eye for these kinds of gestures. If something is off just by a few millimeters, or is not animated in a way that is close to perfect, the illusion immediately falls apart. And the magical thing about these techniques is that with every single iteration, we get something that is way beyond the capabilities of the previous methods, and they come in quick succession. There are plenty more comparisons in the paper as well, so make sure to have a look. It also contains a great idea that opens up the possibility of creating quantitative evaluations against ground truth footage. Turns out that we can have such a thing as ground truth footage. I wonder when we will see the first movie with this kind of reenactment of an actor who passed away. Do you have some other cool applications in mind? Let me know in the comments section. And if you enjoyed this episode, make sure to pick up some cool perks on our Patreon page, where you can manage your paper addiction by getting early access to these episodes and more. We also support cryptocurrencies. The addresses are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
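A hypothetical sketch of the parameter-transfer step described above could look as follows: facial parameters are extracted from both videos, the source's expression, head pose and gaze are combined with the target's identity, and the mixed parameters condition a rendering network. The parameter names, dimensions and the toy renderer are all assumptions for illustration, not the paper's face model or network.

```python
# Hypothetical reenactment-by-parameter-transfer sketch with invented sizes.
import torch
import torch.nn as nn

def reenact(source_params, target_params):
    """Keep the target's identity, take everything else from the source."""
    return {
        "identity":   target_params["identity"],
        "expression": source_params["expression"],
        "head_pose":  source_params["head_pose"],
        "eye_gaze":   source_params["eye_gaze"],
    }

class RenderingNet(nn.Module):
    """Toy stand-in for the network that turns the mixed parameters into a
    photorealistic frame of the target actor."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(80 + 64 + 6 + 4, 3 * 32 * 32)

    def forward(self, params):
        vec = torch.cat([params["identity"], params["expression"],
                         params["head_pose"], params["eye_gaze"]])
        return torch.sigmoid(self.fc(vec)).view(3, 32, 32)

# Usage with random vectors standing in for tracked face-model parameters.
src = {"identity": torch.rand(80), "expression": torch.rand(64),
       "head_pose": torch.rand(6), "eye_gaze": torch.rand(4)}
tgt = {k: torch.rand_like(v) for k, v in src.items()}
frame = RenderingNet()(reenact(src, tgt))
```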
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojol Naifahir."}, {"start": 4.0, "end": 9.16, "text": " Earlier, we talked about an amazing technique where the inputs were a source video of ourselves"}, {"start": 9.16, "end": 11.08, "text": " and a target actor."}, {"start": 11.08, "end": 16.16, "text": " And the output was a reenactment, in other words, a video of this target actor with our facial"}, {"start": 16.16, "end": 17.16, "text": " gestures."}, {"start": 17.16, "end": 21.84, "text": " This requires only a few minutes of video from the target, which is usually already available"}, {"start": 21.84, "end": 22.84, "text": " on the internet."}, {"start": 22.84, "end": 27.16, "text": " Essentially, we can impersonate other people, at least for one video."}, {"start": 27.16, "end": 32.0, "text": " A key part of this new technique is that it extracts additional data, such as polls"}, {"start": 32.0, "end": 37.08, "text": " and eye positions, both from the source and target videos, and uses this data for the"}, {"start": 37.08, "end": 38.08, "text": " reconstruction."}, {"start": 38.08, "end": 42.32, "text": " As opposed to the original face-to-face technique from two years ago, which was already"}, {"start": 42.32, "end": 47.480000000000004, "text": " mind-blowing, you see here that this results in a new learning-based method that supports"}, {"start": 47.480000000000004, "end": 53.480000000000004, "text": " the reenactment of eyebrows and blinking, changing the background, plus head and gaze positioning"}, {"start": 53.480000000000004, "end": 54.480000000000004, "text": " as well."}, {"start": 54.48, "end": 59.0, "text": " So far, this would still be similar to a non-learning-based technique we've seen a few episodes"}, {"start": 59.0, "end": 60.0, "text": " ago."}, {"start": 60.0, "end": 65.16, "text": " And now, hold onto your papers, because this algorithm enables us to not only impersonate,"}, {"start": 65.16, "end": 68.52, "text": " but also control the characters in the output video."}, {"start": 68.52, "end": 70.75999999999999, "text": " The results are truly mesmerizing."}, {"start": 70.75999999999999, "end": 74.08, "text": " I almost fell out of the chair when I first seen them."}, {"start": 74.08, "end": 79.67999999999999, "text": " And what's more, we can create really rich reenactments by editing the expressions, polls,"}, {"start": 79.67999999999999, "end": 82.24, "text": " and blinking separately by hand."}, {"start": 82.24, "end": 86.83999999999999, "text": " What also needs to be emphasized here is that we see and talk to other human beings all"}, {"start": 86.83999999999999, "end": 91.03999999999999, "text": " the time, so we have a remarkably key eye for these kinds of gestures."}, {"start": 91.03999999999999, "end": 96.08, "text": " If something is off just by a few millimeters, or is not animated in a way that is close"}, {"start": 96.08, "end": 99.75999999999999, "text": " to perfect, the illusion immediately falls apart."}, {"start": 99.75999999999999, "end": 104.19999999999999, "text": " And the magical thing about these techniques is that every single iteration, we get something"}, {"start": 104.19999999999999, "end": 108.52, "text": " that is way beyond the capabilities of the previous methods, and they come in quick"}, {"start": 108.52, "end": 109.52, "text": " succession."}, {"start": 109.52, "end": 113.56, "text": " There are plenty of more comparisons in the paper as well, so make sure to have a look."}, 
{"start": 113.56, "end": 119.39999999999999, "text": " It also contains a great idea that opens up the possibility of creating quantitative evaluations"}, {"start": 119.39999999999999, "end": 121.24, "text": " against ground truth footage."}, {"start": 121.24, "end": 124.72, "text": " Turns out that we can have such a thing as ground truth footage."}, {"start": 124.72, "end": 129.12, "text": " I wonder when we will see the first movie with this kind of reenactment of an actor who"}, {"start": 129.12, "end": 130.12, "text": " passed away."}, {"start": 130.12, "end": 132.6, "text": " Do you have some other cool applications in mind?"}, {"start": 132.6, "end": 134.48, "text": " Let me know in the comments section."}, {"start": 134.48, "end": 138.84, "text": " And if you enjoy this episode, make sure to pick up some cool perks on our Patreon page"}, {"start": 138.84, "end": 143.12, "text": " where you can manage your paper addiction by getting early access to these episodes"}, {"start": 143.12, "end": 144.12, "text": " and more."}, {"start": 144.12, "end": 145.72, "text": " We also support crypto currencies."}, {"start": 145.72, "end": 148.28, "text": " The addresses are available in the video description."}, {"start": 148.28, "end": 177.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9S2g7iixB9c
Curiosity-Driven AI: How Effective Is It? | Two Minute Papers #257
The paper "Curiosity-driven Exploration by Self-supervised Prediction" and its source code is available here: https://pathak22.github.io/noreward-rl/ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://flic.kr/p/M843Kp Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are many research projects about teaching an AI to play video games well. We have seen some amazing results from DeepMind's deep Q-learning algorithm that performed on a superhuman level on many games, but faltered on others. What really made the difference is the sparsity of rewards and the lack of longer-term planning. What this means is that the more often we see the score change on our screen, the faster we know how well we are doing and can change our strategy if needed. For instance, if we make a mistake in Atari Breakout, we lose a life almost immediately. But in a strategy game, a bad decision may come back to haunt us up to an hour after committing it. So, what can we do to build an AI that can deal with these cases? So far, we have talked about extrinsic rewards that come from the environment. For instance, our score in a video game. And most existing AIs are, for all intents and purposes, extrinsic score-maximizing machines. And this work is about introducing an intrinsic reward by endowing an AI with one of the most human-like attributes, curiosity. But hold on right there, how can a machine possibly become curious? Well, curiosity is defined by whatever mathematical definition we attach to it. In this work, curiosity is defined as the AI's ability to predict the results of its own actions. This is big because it gives the AI tools to preemptively start learning skills that don't seem useful now but might be useful in the future. In short, this AI is driven to explore even if it hasn't been told how well it is doing. It will naturally start exploring levels in Super Mario, even without seeing the score. And now comes the great part. This curiosity really teaches the AI to learn new skills, and when we drop it into a new, previously unseen level, it will perform much better than a non-curious one. When playing Doom, the legendary first-person shooter game, it will also start exploring the level and is able to rapidly solve hard exploration tasks. The comparisons reveal that an AI infused with curiosity performs significantly better on easier tasks. But the even cooler part is that with curiosity, we can further increase the difficulty of the games and the sparsity of the external rewards and can expect the agent to do well, even when previous algorithms failed. This will be able to play much harder games than previous works. And remember, games are only used to demonstrate the concept here. This will be able to do so much more. Love it. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.2, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.2, "end": 8.700000000000001, "text": " There are many research projects about teaching an AI to play video games well."}, {"start": 8.700000000000001, "end": 13.0, "text": " We have seen some amazing results from DeepMind's DeepQ learning algorithm"}, {"start": 13.0, "end": 18.0, "text": " that performed on a superhuman level on many games, but faltered on others."}, {"start": 18.0, "end": 23.0, "text": " What really made the difference is the sparsity of rewards and the lack of longer-term planning."}, {"start": 23.0, "end": 27.0, "text": " What this means is that the more often we see the score change on our screen,"}, {"start": 27.0, "end": 31.0, "text": " the faster we know how well we are doing and change our strategy if needed."}, {"start": 31.0, "end": 36.0, "text": " For instance, if we make a mistake in a Tory breakout, we lose a life almost immediately."}, {"start": 36.0, "end": 42.0, "text": " But in a strategy game, a bad decision may come back to haunt us up to an hour after committing it."}, {"start": 42.0, "end": 46.0, "text": " So, what can we do to build an AI that can deal with these cases?"}, {"start": 46.0, "end": 51.0, "text": " So far, we have talked about extrinsic rewards that come from the environment."}, {"start": 51.0, "end": 55.0, "text": " For instance, our score in a video game and most existing AI's are"}, {"start": 55.0, "end": 59.0, "text": " for all intents and purposes extrinsic score maximizing machines."}, {"start": 59.0, "end": 64.0, "text": " And this work is about introducing an intrinsic reward by endowing an AI"}, {"start": 64.0, "end": 68.0, "text": " with one of the most human-like attributes, curiosity."}, {"start": 68.0, "end": 72.0, "text": " But hold on right there, how can a machine possibly become curious?"}, {"start": 72.0, "end": 77.0, "text": " Well, curiosity is defined by whatever mathematical definition we attached to it."}, {"start": 77.0, "end": 83.0, "text": " In this work, curiosity is defined as the AI's ability to predict the results of its own actions."}, {"start": 83.0, "end": 88.0, "text": " This is big because it gives the AI tools to preemptively start learning skills"}, {"start": 88.0, "end": 92.0, "text": " that don't seem useful now but might be useful in the future."}, {"start": 92.0, "end": 97.0, "text": " In short, this AI is driven to explore even if it hasn't been told how well it is doing."}, {"start": 97.0, "end": 103.0, "text": " It will naturally start exploring levels in Super Mario, even without seeing the score."}, {"start": 103.0, "end": 105.0, "text": " And now comes the great part."}, {"start": 105.0, "end": 110.0, "text": " This curiosity really teaches the AI to learn new skills and when we drop it into a new,"}, {"start": 110.0, "end": 115.0, "text": " previously unseen level, it will perform much better than a non-curious one."}, {"start": 115.0, "end": 119.0, "text": " When playing Doom, the legendary first-person shooter game,"}, {"start": 119.0, "end": 124.0, "text": " it will also start exploring the level and is able to rapidly solve hard exploration tasks."}, {"start": 124.0, "end": 132.0, "text": " The comparisons reveal that an AI infused with curiosity performs significantly better on easier tasks."}, {"start": 132.0, "end": 137.0, "text": " But the even cooler part is that with curiosity, we can further increase the difficulty of the games"}, {"start": 137.0, "end": 
142.0, "text": " and the sparsity of the external rewards and can expect the agent to do well,"}, {"start": 142.0, "end": 145.0, "text": " even when previous algorithms failed."}, {"start": 145.0, "end": 149.0, "text": " This will be able to play much harder games than previous works."}, {"start": 149.0, "end": 153.0, "text": " And remember, games are only used to demonstrate the concept here."}, {"start": 153.0, "end": 155.0, "text": " This will be able to do so much more."}, {"start": 155.0, "end": 156.0, "text": " Love it."}, {"start": 156.0, "end": 168.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SWW0nVQNm2w
Neural Image Stitching And Morphing | Two Minute Papers #256
The paper "Neural Best-Buddies: Sparse Cross-Domain Correspondence" is available here: https://arxiv.org/abs/1805.04140 Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Rafael Harutyuynyan, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-1580869/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Consider this problem. We have a pair of images that are visually quite different, but have similar semantic meanings, and we wish to map points between them. Now this might sound a bit weird, so bear with me for a moment. For instance, geese and airplanes look quite different, but both have wings and front and back regions. The paw of a lion looks quite different from a cat's foot, but they share the same function and are semantically similar. This is an AI-based technique that is able to find these corresponding points between our pair of images. In fact, the point pairs you've seen so far have been found by this AI. The main difference between this and previous non-learning-based techniques is that instead of pairing up regions based on pixel-color similarities, it measures how similar they are in terms of the neural network's internal representation. This makes all the difference. So far this is pretty cool, but is that it? Mapping points? Well, if we can map points effectively, we can map regions as a collection of points. This enables two killer applications. One, this can augment already existing artistic tools so that we can create a hybrid between two images. And the cool thing is that we don't even need to have any drawing skills, because we only have to add these colored masks and the algorithm finds and stitches together the corresponding regions. And two, it can also perform cross-domain image morphing. That's an amazing term, but what does this mean? This means that we have our pair of images from earlier and we are not interested in stitching together a new image from their parts, but we want an animation where the starting point is one image, the ending point is the other, and we get a smooth and meaningful transition between the two. There are some really cool use cases for this. For example, we can start out from a cartoon drawing, set our photo as an endpoint, and witness this beautiful morphing between the two. Kind of like in style transfer, but we have more fine-grained control over the output. Really cool. And note that many images in between are usable as is. No artistic skills needed. And of course, there is a mandatory animation that makes a cat from a dog. As usual, there are lots of comparisons to other similar techniques in the paper. This tool is going to be invaluable for, I was about to say, artists, but this doesn't require any technical expertise, just good taste and a little bit of imagination. What an incredible time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifahir."}, {"start": 4.46, "end": 5.74, "text": " Consider this problem."}, {"start": 5.74, "end": 10.24, "text": " We have a pair of images that are visually quite different, but have similar semantic"}, {"start": 10.24, "end": 13.120000000000001, "text": " meanings and we wish to map points between them."}, {"start": 13.120000000000001, "end": 16.38, "text": " Now this might sound a bit weird, so bear with me for a moment."}, {"start": 16.38, "end": 21.28, "text": " For instance, geese and airplanes look quite different, but both have wings and front"}, {"start": 21.28, "end": 22.54, "text": " and back regions."}, {"start": 22.54, "end": 27.32, "text": " The paw of a lion looks quite different from a cat's foot, but they share the same function"}, {"start": 27.32, "end": 29.080000000000002, "text": " and are semantically similar."}, {"start": 29.08, "end": 33.28, "text": " This is an AI-based technique that is able to find these corresponding points between"}, {"start": 33.28, "end": 34.72, "text": " our pair of images."}, {"start": 34.72, "end": 39.16, "text": " In fact, the point pairs you've seen so far have been found by this AI."}, {"start": 39.16, "end": 43.48, "text": " The main difference between this and previous non-learning-based techniques is that instead"}, {"start": 43.48, "end": 48.68, "text": " of pairing up regions based on pixel-color similarities, it measures how similar they"}, {"start": 48.68, "end": 52.959999999999994, "text": " are in terms of the neural network's internal representation."}, {"start": 52.959999999999994, "end": 54.68, "text": " This makes all the difference."}, {"start": 54.68, "end": 57.56, "text": " So far this is pretty cool, but is that it?"}, {"start": 57.56, "end": 59.120000000000005, "text": " Apping points?"}, {"start": 59.120000000000005, "end": 64.04, "text": " Well, if we can map points effectively, we can map regions as a collection of points."}, {"start": 64.04, "end": 66.68, "text": " This enables two killer applications."}, {"start": 66.68, "end": 71.96000000000001, "text": " One, this can augment already existing artistic tools so that we can create a hybrid between"}, {"start": 71.96000000000001, "end": 73.12, "text": " two images."}, {"start": 73.12, "end": 77.28, "text": " And the cool thing is that we don't even need to have any drawing skills because we only"}, {"start": 77.28, "end": 82.56, "text": " have to add these colored masks and the algorithm finds and stitches together the corresponding"}, {"start": 82.56, "end": 85.6, "text": " images."}, {"start": 85.6, "end": 89.39999999999999, "text": " And two, it can also perform cross-domain image morphing."}, {"start": 89.39999999999999, "end": 92.55999999999999, "text": " That's an amazing term, but what does this mean?"}, {"start": 92.55999999999999, "end": 96.44, "text": " This means that we have our pair of images from earlier and we are not interested in"}, {"start": 96.44, "end": 101.28, "text": " stitching together a new image from their parts, but we want an animation where the starting"}, {"start": 101.28, "end": 106.63999999999999, "text": " point is one image, the ending point is the other, and we get a smooth and meaningful transition"}, {"start": 106.63999999999999, "end": 107.63999999999999, "text": " between the two."}, {"start": 107.63999999999999, "end": 110.32, "text": " There are some really cool use cases for this."}, {"start": 110.32, "end": 
115.47999999999999, "text": " For example, we can start out from a cartoon drawing, set our photo as an endpoint and"}, {"start": 115.48, "end": 118.56, "text": " witness this beautiful morphing between the two."}, {"start": 118.56, "end": 123.32000000000001, "text": " Kind of like in style transfer, but we have more fine-grained control over the output."}, {"start": 123.32000000000001, "end": 124.32000000000001, "text": " Really cool."}, {"start": 124.32000000000001, "end": 128.28, "text": " And note that many images in between are usable as is."}, {"start": 128.28, "end": 130.12, "text": " No artistic skills needed."}, {"start": 130.12, "end": 134.56, "text": " And of course, there is a mandatory animation that makes a cat from a dog."}, {"start": 134.56, "end": 138.76, "text": " As usual, there are lots of comparisons to other similar techniques in the paper."}, {"start": 138.76, "end": 143.36, "text": " This tool is going to be invaluable for, I was about to say, artists, but this doesn't"}, {"start": 143.36, "end": 148.92000000000002, "text": " require any technical expertise, just good taste and a little bit of imagination."}, {"start": 148.92000000000002, "end": 150.84, "text": " What an incredible time to be alive."}, {"start": 150.84, "end": 180.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
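The core matching idea from the transcript above can be sketched briefly: instead of comparing raw pixel colors, patches are compared through a deep network's feature activations, and only mutual nearest neighbors ("best buddies") are kept as correspondences. The feature-extraction step is left abstract here (in the paper it comes from a pretrained classification CNN applied hierarchically); this is an illustrative simplification, not the authors' implementation.

```python
import numpy as np

def normalize(feats):
    # feats: (num_patches, feature_dim) activations for one image.
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)

def mutual_nearest_neighbors(feats_a, feats_b):
    """Return index pairs (i, j) where patch i of image A and patch j of image B
    are each other's nearest neighbor in feature space (cosine similarity)."""
    a, b = normalize(feats_a), normalize(feats_b)
    sim = a @ b.T                      # cosine similarity matrix
    best_for_a = sim.argmax(axis=1)    # A -> B nearest neighbors
    best_for_b = sim.argmax(axis=0)    # B -> A nearest neighbors
    return [(i, j) for i, j in enumerate(best_for_a) if best_for_b[j] == i]

# Toy usage with random "activations" standing in for real CNN features.
rng = np.random.default_rng(1)
pairs = mutual_nearest_neighbors(rng.normal(size=(50, 256)), rng.normal(size=(60, 256)))
print(len(pairs), "candidate correspondences")
```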
Two Minute Papers
https://www.youtube.com/watch?v=tU484zM3pDY
NVIDIA's AI Removes Objects From Your Photos! ❌
Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "Image Inpainting for Irregular Holes Using Partial Convolutions" is available here: https://arxiv.org/abs/1804.07723 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have you ever had an experience when you shot an almost perfect photograph of, for instance, an amazing landscape, but unfortunately, it was littered with unwanted objects? If only we had an algorithm that could perform image inpainting, in other words, delete a small part of an image and have it automatically filled in. So let's have a look at NVIDIA's AI-based solution. On the left, you see the white regions that are given to the algorithm to correct, and on the right, you see the corrected images. So, it works amazingly well, but the question is, why? This is an established research field, so what new insight can an AI-based approach bring to the table? Well, traditional non-learning approaches either try to fill these holes in with other pixels from the same image that have similar neighborhoods, copy-paste something similar, if you will, or they try to record the distribution of pixel colors and try to fill in something using that knowledge. And here comes the important part. None of these traditional approaches have an intuitive understanding of the contents of the image, and that is the main value proposition of the neural network-based learning techniques. This work also borrows from earlier artistic style transfer methods to make sure that not only the content, but the style of the inpainted regions also matches the original image. It is also remarkable that this new method works with images that are devoid of symmetries and can also deal with cases where we cut out really crazy, irregularly-shaped holes. Of course, like every good piece of research work, it has to be compared to previous algorithms. As you can see here, the quality of different techniques is measured against a reference output, and it is quite clear that this method produces more convincing results than its competitors. For reference, PatchMatch is a landmark paper from almost 10 years ago that still represents the state of the art for non-learning-based techniques. The paper contains a ton more of these comparisons, so make sure to have a look. Without a doubt, this is going to be an invaluable tool for artists in the future. In fact, in this very series, we use Photoshop's built-in image inpainting tool on a daily basis, so this will make our lives much easier. Loving it. Also, did you know that you can get early access to each of these videos? If you are addicted to the series, have a look at our Patreon page, Patreon.com slash TwoMinutePapers, or just click the link in the video description. There are also other really cool perks, like getting your name as a key supporter in the video description, or deciding the order of the next few episodes. We also support cryptocurrencies, the addresses are in the video description, and with this, you also help us make better videos in the future. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.3, "text": " These fellows colors, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.3, "end": 10.3, "text": " Ever had an experience when you shot an almost perfect photograph of, for instance, an amazing landscape,"}, {"start": 10.3, "end": 13.700000000000001, "text": " but unfortunately, it was littered with unwanted objects."}, {"start": 13.700000000000001, "end": 22.0, "text": " If only we had an algorithm that could perform image impainting, in other words, delete a small part of an image and have it automatically filled in."}, {"start": 22.0, "end": 25.0, "text": " So let's have a look at MVDS AI-based solution."}, {"start": 25.0, "end": 31.5, "text": " On the left, you see the white regions that are given to the algorithm to correct, and on the right, you see the corrected images."}, {"start": 31.5, "end": 35.5, "text": " So, it works amazingly well, but the question is, why?"}, {"start": 35.5, "end": 40.5, "text": " This is an established research field, so what new can an AI-based approach bring to the table?"}, {"start": 40.5, "end": 48.5, "text": " Well, traditional non-learning approaches either try to fill these holes in with other pixels from the same image that have similar neighborhoods,"}, {"start": 48.5, "end": 58.0, "text": " copy-paste something similar, if you will, or they try to record the distribution of pixel colors, and try to fill in something using that knowledge."}, {"start": 58.0, "end": 65.0, "text": " And here comes the important part. None of these traditional approaches have an intuitive understanding of the contents of the image,"}, {"start": 65.0, "end": 69.5, "text": " and that is the main value proposition of the neural network-based learning techniques."}, {"start": 69.5, "end": 75.5, "text": " This work also borrows from earlier artistic style transfer methods to make sure that not only the content,"}, {"start": 75.5, "end": 79.5, "text": " but the style of the impainted regions also match the original image."}, {"start": 79.5, "end": 90.5, "text": " It is also remarkable that this new method works with images that are devoid of symmetries and can also deal with cases where we cut out really crazy irregularly-shaped holes."}, {"start": 90.5, "end": 95.5, "text": " Of course, like every good piece of research work, it has to be compared to previous algorithms."}, {"start": 95.5, "end": 105.5, "text": " As you can see here, the quality of different techniques is measured against a reference output, and it is quite clear that this method produces more convincing results than its competitors."}, {"start": 105.5, "end": 113.5, "text": " For reference, PatchMatch is a landmark paper from almost 10 years ago that still represents the state of the art for non-learning-based techniques."}, {"start": 113.5, "end": 117.5, "text": " The paper contains a ton more of these comparisons, so make sure to have a look."}, {"start": 117.5, "end": 122.5, "text": " Without doubt, this is going to be an invaluable tool for artists in the future."}, {"start": 122.5, "end": 130.5, "text": " In fact, in this very series, we use Photoshop's built-in image-impaining tool on a daily basis, so this will make our lives much easier."}, {"start": 130.5, "end": 131.5, "text": " Loving it."}, {"start": 131.5, "end": 135.5, "text": " Also, did you know that you can get early access to each of these videos?"}, {"start": 135.5, "end": 143.5, "text": " If you are addicted to the series, have a look at our Patreon page, Patreon.com slash 2-minute 
papers, or just click the link in the video description."}, {"start": 143.5, "end": 152.5, "text": " There are also other, really cool perks, like getting your name as a key supporter in the video description, or deciding the order of the next few episodes."}, {"start": 152.5, "end": 159.5, "text": " We also support Cryptocurrencies, the addresses are in the video description, and with this, you also help us make better videos in the future."}, {"start": 159.5, "end": 174.5, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
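The paper behind the video above introduces "partial convolutions" for inpainting irregular holes. A minimal sketch of that layer is shown below: the convolution only aggregates valid (non-hole) pixels, rescales by how many valid pixels were present under the window, and then updates the hole mask so that holes shrink layer by layer. This is a simplified illustration based on the paper's description, not NVIDIA's released implementation.

```python
import torch
import torch.nn.functional as F

class PartialConv2d(torch.nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.05)
        self.bias = torch.nn.Parameter(torch.zeros(out_ch))
        # Fixed all-ones kernel used to count how many valid pixels fall under each window.
        self.register_buffer("ones", torch.ones(1, in_ch, kernel_size, kernel_size))
        self.padding = padding

    def forward(self, x, mask):
        # x: image features; mask: 1 where pixels are valid, 0 inside holes.
        valid_count = F.conv2d(mask, self.ones, padding=self.padding)
        out = F.conv2d(x * mask, self.weight, padding=self.padding)
        # Rescale by the fraction of valid pixels so partially covered windows are not dimmed.
        scale = self.ones.numel() / torch.clamp(valid_count, min=1.0)
        out = out * scale + self.bias.view(1, -1, 1, 1)
        out = torch.where(valid_count > 0, out, torch.zeros_like(out))
        new_mask = (valid_count > 0).float()   # the hole shrinks after every layer
        return out, new_mask

# Toy usage: a single-channel image with a square hole in the middle.
img = torch.rand(1, 1, 32, 32)
mask = torch.ones_like(img)
mask[:, :, 12:20, 12:20] = 0.0
layer = PartialConv2d(1, 8)
features, updated_mask = layer(img, mask)
```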
Two Minute Papers
https://www.youtube.com/watch?v=EQX1wsL2TSs
This Technique Impersonates People | Two Minute Papers #254
The paper "HeadOn: Real-time Reenactment of Human Portrait Videos" is available here: http://niessnerlab.org/projects/thies2018headon.html More on Apple's Memoji: https://www.youtube.com/watch?v=CjqERCCD4iM Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-1867320/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Deepfake
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Two years ago, in 2016, we talked about a paper that enabled us to sit in front of a camera and transfer our gestures onto a virtual actor. This work went by the name Face2Face and showcased a bunch of mesmerizing results containing reenactments of famous political figureheads. It was quite amazing, but it is nothing compared to this one. And the reason for this is that the original Face2Face paper only transferred expressions, but this new work is capable of transferring head and torso movements as well. Not only that, but mouth interiors also appear more realistic, and more gaze directions are also supported. You see in the comparison here that the original method disregarded many of these features and how much more convincing this new one is. This extended technique opens up the door to several really cool new applications. For instance, consider this self-reenactment application. This means that you can reenact yourself. Now, what would that be useful for, you may ask? Well, of course, you can appear to be the most professional person during a virtual meeting even when sitting at home in your undergarments. Or you can quickly switch teams based on who is winning the game. Avatar digitization is also possible. This basically means that we can create a stylized version of our likeness to be used in a video game. Somewhat similar to the Memoji presented in Apple's latest keynote with the iPhone X. And the entire process takes place in real time without using neural networks. This is as good as it gets. What a time to be alive. Of course, like every other technique, this also has its own set of limitations. For instance, illumination changes in the environment are not always taken into account, and long-haired subjects with extreme motion may cause artifacts to appear. In short, don't use this for rock concerts. And with this, we are also one step closer to full-character reenactment for movies, video games, and telepresence applications. This is still a new piece of technology and may offer many more applications that we haven't thought of yet. After all, when the internet was invented, who thought that it could be used to order pizza or transfer bitcoin? Or order pizza and pay with bitcoin? Anyway, if you have some more applications in mind, let me know in the comments section. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.04, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Jonaifahir."}, {"start": 4.04, "end": 9.52, "text": " Two years ago, in 2016, we talked about a paper that enabled us to sit in and transfer"}, {"start": 9.52, "end": 12.08, "text": " our gestures onto a virtual actor."}, {"start": 12.08, "end": 17.48, "text": " This work went by the name Face to Face and showcased a bunch of mesmerizing results containing"}, {"start": 17.48, "end": 20.64, "text": " reenactments of famous political figureheads."}, {"start": 20.64, "end": 24.52, "text": " It was quite amazing, but it is nothing compared to this one."}, {"start": 24.52, "end": 29.48, "text": " And the reason for this is that the original Face to Face paper only transferred expressions,"}, {"start": 29.48, "end": 33.96, "text": " but this new work is capable of transferring head and torso movements as well."}, {"start": 33.96, "end": 39.120000000000005, "text": " Not only that, but mouth interiors also appear more realistic and more gaze directions"}, {"start": 39.120000000000005, "end": 40.6, "text": " are also supported."}, {"start": 40.6, "end": 45.120000000000005, "text": " You see in the comparison here that the original method disregarded many of these features"}, {"start": 45.120000000000005, "end": 47.84, "text": " and how much more convincing this new one is."}, {"start": 47.84, "end": 52.8, "text": " This extended technique opens up the door to several really cool new applications."}, {"start": 52.8, "end": 56.120000000000005, "text": " For instance, consider this self reenactment application."}, {"start": 56.120000000000005, "end": 58.56, "text": " This means that you can reenact yourself."}, {"start": 58.56, "end": 61.64, "text": " Now, what would that be useful for you may ask?"}, {"start": 61.64, "end": 66.84, "text": " Well, of course, you can appear to be the most professional person during a virtual meeting"}, {"start": 66.84, "end": 69.92, "text": " even when sitting at home in your undergarment."}, {"start": 69.92, "end": 73.68, "text": " Or you can quickly switch teams based on who is winning the game."}, {"start": 73.68, "end": 76.36, "text": " Avatar digitization is also possible."}, {"start": 76.36, "end": 80.92, "text": " This basically means that we can create a stylized version of our likeness to be used in"}, {"start": 80.92, "end": 81.76, "text": " a video game."}, {"start": 81.76, "end": 87.0, "text": " Somewhat similar to the Mimoji presented in Apple's latest keynote with the iPhone X."}, {"start": 87.0, "end": 92.12, "text": " And the entire process takes place in real time without using neural networks."}, {"start": 92.12, "end": 93.96, "text": " This is as good as it gets."}, {"start": 93.96, "end": 95.44, "text": " What a time to be alive."}, {"start": 95.44, "end": 99.84, "text": " Of course, like every other technique, this also has its own set of limitations."}, {"start": 99.84, "end": 104.56, "text": " For instance, illumination changes in the environment are not always taken into account"}, {"start": 104.56, "end": 109.68, "text": " and long-haired subjects with extreme motion may cause artifacts to appear."}, {"start": 109.68, "end": 112.2, "text": " In short, don't use this for rock concerts."}, {"start": 112.2, "end": 117.2, "text": " And with this, we are also one step closer to full-characterial enactment for movies, video"}, {"start": 117.2, "end": 119.64, "text": " games, and telepresence applications."}, {"start": 119.64, "end": 124.04, "text": " 
This is still a new piece of technology and may offer many more applications that we haven't"}, {"start": 124.04, "end": 125.24000000000001, "text": " thought of yet."}, {"start": 125.24000000000001, "end": 129.0, "text": " After all, when the internet was invented, who thought that it could be used to order"}, {"start": 129.0, "end": 131.4, "text": " pizza or transfer bitcoin?"}, {"start": 131.4, "end": 133.84, "text": " Or order pizza and pay with bitcoin?"}, {"start": 133.84, "end": 138.0, "text": " Anyway, if you have some more applications in mind, let me know in the comments section."}, {"start": 138.0, "end": 141.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bcZFQ3f26pA
This AI Learned To See In The Dark! 👀
The paper "Learning to See in the Dark" and its source code is available here: http://cchen156.web.engr.illinois.edu/paper/18CVPR_SID.pdf https://github.com/cchen156/Learning-to-See-in-the-Dark Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-2618462/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #NightSight #NightMode
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you start watching reviews of some of the more recent smartphones, you will almost always see a dedicated section on low-light photography. The result is almost always that cameras that work remarkably well in well-lit scenes produce almost unusable results in dim environments. So unless we have access to a super-expensive camera, what can we really do to obtain more usable low-light images? Well, of course we could try brightening the image up by increasing the exposure. This would help maybe a tiny bit, but would also mess up our white balance and also amplify the noise within the image. I hope that by now you are getting the feeling that there must be a better AI-based solution. Let's have a look. This is an image of a dark indoor environment, as I am sure you have noticed. This was taken with a relatively high light sensitivity that can be achieved with a consumer camera. This footage is unusable, and this image was taken by an expensive camera with extremely high light sensitivity settings. This footage is kind of usable, but is quite dim and is highly contaminated by noise. And now hold on to your papers, because this AI-based technique takes sensor data from the first unusable image and produces this. Holy smokes. And you know what the best part is? It produced this output image in less than a second. Let's have a look at some more results. These look almost too good to be true, but luckily we have a paper at our disposal, so we can have a look at some of the details of the technique. It reveals that we have to use a convolutional neural network to learn the concept of this kind of image translation, but that also means that we require some training data. The input should contain a bunch of dark images. These are the before images. This can hardly be a problem, but the outputs should always be the corresponding image with better visibility. These are the after images. So how do we obtain them? The key idea is to use different exposure times for the input and output images. A short exposure time means that when taking a photograph, the camera's shutter is only open for a short amount of time. This means that less light is let in, therefore the photo will be darker. This is perfect for the input images, as these will be the ones to be improved, and the improved versions are going to be the images with a much longer exposure time. This is because more light is let in, and we get brighter and clearer images. This is exactly what we are looking for. So now that we have the before and after images, which we refer to as input and output, we can start training the network to learn how to perform low-light photography well. And as you see here, the results are remarkable. Machine learning research at its finest. I really hope we get a software implementation of something like this in the smartphones of the near future, that would be quite amazing. And as we have only scratched the surface, please make sure to look at the paper, as it contains a lot more details. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.28, "end": 8.52, "text": " If you start watching reviews of some of the more recent smartphones, you will almost"}, {"start": 8.52, "end": 12.0, "text": " always see a dedicated section to low light photography."}, {"start": 12.0, "end": 17.76, "text": " The result is almost always that cameras that work remarkably well in well-lit scenes produce"}, {"start": 17.76, "end": 20.92, "text": " almost unusable results in the environments."}, {"start": 20.92, "end": 25.96, "text": " So unless we have access to a super-expensive camera, what can we really do to obtain"}, {"start": 25.96, "end": 28.04, "text": " more usable low light images?"}, {"start": 28.04, "end": 32.879999999999995, "text": " Well, of course we could try brightening the image up by increasing the exposure."}, {"start": 32.879999999999995, "end": 38.4, "text": " This would help maybe a tiny bit but would also mess up our wide balance and also amplify"}, {"start": 38.4, "end": 40.08, "text": " the noise within the image."}, {"start": 40.08, "end": 45.599999999999994, "text": " I hope that by now you are getting the feeling that there must be a better AIB solution."}, {"start": 45.599999999999994, "end": 46.84, "text": " Let's have a look."}, {"start": 46.84, "end": 51.16, "text": " This is an image of a dark indoor environment I am sure you have noticed."}, {"start": 51.16, "end": 55.76, "text": " This was taken with a relatively high light sensitivity that can be achieved with a consumer"}, {"start": 55.76, "end": 56.76, "text": " camera."}, {"start": 56.76, "end": 62.08, "text": " This footage is unusable and this image was taken by an expensive camera with extremely"}, {"start": 62.08, "end": 64.2, "text": " high light sensitivity settings."}, {"start": 64.2, "end": 69.8, "text": " This footage is kind of usable but is quite dim and is highly contaminated by noise."}, {"start": 69.8, "end": 75.68, "text": " And now hold on to your papers because this AIB technique takes sensor data from the first"}, {"start": 75.68, "end": 78.96, "text": " unusable image and produces this."}, {"start": 78.96, "end": 80.47999999999999, "text": " Holy smokes."}, {"start": 80.47999999999999, "end": 82.08, "text": " And you know what the best part is?"}, {"start": 82.08, "end": 85.68, "text": " It produced this output image in less than a second."}, {"start": 85.68, "end": 88.24000000000001, "text": " Let's have a look at some more results."}, {"start": 88.24000000000001, "end": 93.52000000000001, "text": " These look almost too good to be true but luckily we have a paper at our disposal so we can"}, {"start": 93.52000000000001, "end": 96.16000000000001, "text": " have a look at some of the details of the technique."}, {"start": 96.16000000000001, "end": 100.80000000000001, "text": " It reveals that we have to use a convolutional neural network to learn the concept of this"}, {"start": 100.80000000000001, "end": 105.84, "text": " kind of image translation but that also means that we require some training data."}, {"start": 105.84, "end": 108.68, "text": " The input should contain a bunch of dark images."}, {"start": 108.68, "end": 110.28, "text": " These are the before images."}, {"start": 110.28, "end": 115.04, "text": " This can hardly be a problem but the outputs should always be the corresponding image with"}, {"start": 115.04, "end": 116.48, "text": " better visibility."}, {"start": 116.48, "end": 118.52000000000001, 
"text": " These are the after images."}, {"start": 118.52000000000001, "end": 119.92, "text": " So how do we obtain them?"}, {"start": 119.92, "end": 124.72, "text": " The key idea is to use different exposure times for the input and output images."}, {"start": 124.72, "end": 129.48000000000002, "text": " A short exposure time means that when taking a photograph the camera aperture is only"}, {"start": 129.48000000000002, "end": 131.64000000000001, "text": " open for a short amount of time."}, {"start": 131.64000000000001, "end": 135.84, "text": " This means that less light is let in therefore the photo will be darker."}, {"start": 135.84, "end": 140.4, "text": " This is perfect for the input images as these will be the ones to be improved and the improved"}, {"start": 140.4, "end": 144.48000000000002, "text": " versions are going to be the images with a much longer exposure time."}, {"start": 144.48, "end": 148.95999999999998, "text": " This is because more light is let in and will get brighter and clearer images."}, {"start": 148.95999999999998, "end": 151.12, "text": " This is exactly what we are looking for."}, {"start": 151.12, "end": 156.23999999999998, "text": " So now that we have the before and after images that we refer to as input and output we can"}, {"start": 156.23999999999998, "end": 161.04, "text": " start training the network to learn how to perform low light photography well."}, {"start": 161.04, "end": 163.95999999999998, "text": " And as you see here the results are remarkable."}, {"start": 163.95999999999998, "end": 165.95999999999998, "text": " Machine learning research at its finest."}, {"start": 165.95999999999998, "end": 170.12, "text": " I really hope we get a software implementation of something like this in the smartphones of"}, {"start": 170.12, "end": 172.72, "text": " the near future that would be quite amazing."}, {"start": 172.72, "end": 176.28, "text": " And as we have only scratched the surface please make sure to look at the paper as it"}, {"start": 176.28, "end": 178.04, "text": " contains a lot more details."}, {"start": 178.04, "end": 205.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KL6U6iasUxs
AI-Based Large-Scale Texture Synthesis | Two Minute Papers #252
The paper "Non-stationary Texture Synthesis by Adversarial Expansion" and its source code is available here: http://vcc.szu.edu.cn/research/2018/TexSyn https://github.com/jessemelpolio/non-stationary_texture_syn Errata: please note that the image at the start of the video is of a wrong paper. Apologies! Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers One-time payment links and crypto addresses are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-3013486/ Texture tiling image credit: https://commons.wikimedia.org/wiki/File:In-game-view-doom.png Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When an artist is in the process of creating digital media, such as populating a virtual world for an animation movie or a video game, or even in graphic design, the artist often requires a large number of textures for these kinds of works. Concrete walls, leaves, and fabrics are materials that we know well from the real world, and sometimes the process of obtaining textures is as simple as paying for a texture package and using it. But the problem quite often occurs that we wish to fill an entire road with a concrete texture, but we only have a small patch at our disposal. In this case, the easiest and worst solution is to copy-paste this texture over and over, creating really unpleasant results that are quite repetitive and suffer from seams. So what about an AI-based technique that looks at a small patch and automatically continues it in a way that looks natural and seamless? This is an area within computer graphics and AI that we call texture synthesis. Periodic texture synthesis is simple, but textures with structure are super difficult. The selling point of this particular work is that it is highly efficient at taking into consideration the content and symmetries of the image. For instance, it knows that it has to take into consideration the concentric nature of the wood rings when synthesizing this texture, and it can also adapt to the regularities of this water texture and create a beautiful, high-resolution result. This is a neural network-based technique, so first, the question is, what should the training data be? Let's take a database of high-resolution images. Let's cut out a small part, pretend that we don't have access to the bigger image, and ask a neural network to try to expand this small cutout. This sounds a little silly, so what is this trickery good for? Well, this is super useful, because after the neural network has expanded the small cutout, we have the original, bigger image as a reference result in our hands that we can compare to, and this way, teach the network to do better. Note that this architecture is a generative adversarial network, where two neural networks battle each other. The generator network is the creator that expands the small texture snippets, and the discriminator network takes a look and tries to tell it from the real deal. Over time, the generator network learns to be better at texture synthesis, and the discriminator network becomes better at telling synthesized results from real ones. Over time, this rivalry leads to results that are of extremely high quality. And as you can see in this comparison, this new technique smokes the competition. The paper contains a ton more results and comparisons, and one of the most exhaustive evaluation sections I've seen in texture synthesis so far. I highly recommend reading it. If you would like to see more episodes like this, make sure to pick up one of the cool perks we offer through Patreon, such as deciding the order of future episodes or getting your name in the video description of every episode as a key supporter. We also support cryptocurrencies like Bitcoin, Ethereum, and Litecoin. We had a few really generous pledges in the last few weeks. I am quite stunned, to be honest, and I regret that I cannot get in contact with these fellow scholars. If you can contact me, that would be great. If not, thank you so much, everyone, for your unwavering support. This is just incredible. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5200000000000005, "end": 9.16, "text": " When an artist is in the process of creating digital media, such as populating a virtual"}, {"start": 9.16, "end": 14.88, "text": " world for an animation movie or a video game, or even in graphic design, the artist often"}, {"start": 14.88, "end": 18.88, "text": " requires a large number of textures for these kinds of works."}, {"start": 18.88, "end": 23.78, "text": " Concrete walls, leaves, fabrics are materials that we know well from the real world, and"}, {"start": 23.78, "end": 28.96, "text": " sometimes the process of obtaining textures is as simple as paying for a texture package"}, {"start": 28.96, "end": 30.12, "text": " and using it."}, {"start": 30.12, "end": 34.82, "text": " But the problem quite often occurs that we wish to fill an entire road with a concrete"}, {"start": 34.82, "end": 38.480000000000004, "text": " texture, but we only have a small patch at our disposal."}, {"start": 38.480000000000004, "end": 43.64, "text": " In this case, the easiest and worst solution is to copy-paste this texture over and over,"}, {"start": 43.64, "end": 48.44, "text": " creating really unpleasant results that are quite repetitive and suffer from seams."}, {"start": 48.44, "end": 53.52, "text": " So what about an AI-based technique that looks at a small patch and automatically continues"}, {"start": 53.52, "end": 57.120000000000005, "text": " it in a way that looks natural and seamless?"}, {"start": 57.12, "end": 62.28, "text": " This is an area within computer graphics and AI that we call texture synthesis."}, {"start": 62.28, "end": 67.16, "text": " Periodic texture synthesis is simple, but textures with structure are super difficult."}, {"start": 67.16, "end": 71.36, "text": " The selling point of this particular work is that it is highly efficient at taking into"}, {"start": 71.36, "end": 74.96, "text": " consideration the content and symmetries of the image."}, {"start": 74.96, "end": 79.4, "text": " For instance, it knows that it has to take into consideration the concentric nature of"}, {"start": 79.4, "end": 84.36, "text": " the wood rings when synthesizing this texture, and it can also adapt to the regularities"}, {"start": 84.36, "end": 89.0, "text": " of this water texture and create a beautiful, high-resolution result."}, {"start": 89.0, "end": 92.88, "text": " This is a neural network-based technique, so first, the question is, what should the"}, {"start": 92.88, "end": 94.44, "text": " training data be?"}, {"start": 94.44, "end": 97.44, "text": " Let's take a database of high-resolution images."}, {"start": 97.44, "end": 102.2, "text": " Let's cut out a small part and pretend that we don't have access to the bigger image"}, {"start": 102.2, "end": 106.64, "text": " and ask a neural network to try to expand this small cutout."}, {"start": 106.64, "end": 110.0, "text": " This sounds a little silly, so what is this trickery good for?"}, {"start": 110.0, "end": 114.88, "text": " Well, this is super useful because after the neural network has expanded the results,"}, {"start": 114.88, "end": 119.56, "text": " we now have a reference result in our hands that we can compare to, and this way, teach"}, {"start": 119.56, "end": 121.4, "text": " the network to do better."}, {"start": 121.4, "end": 126.03999999999999, "text": " Note that this architecture is a generative adversarial network where two neural 
networks"}, {"start": 126.03999999999999, "end": 127.36, "text": " battle each other."}, {"start": 127.36, "end": 133.0, "text": " The generator network is the creator that expands the small texture snippets and the discriminator"}, {"start": 133.0, "end": 136.24, "text": " network takes a look and tries to tell it from the real deal."}, {"start": 136.24, "end": 141.56, "text": " Over time, the generator network learns to be better at texture synthesis and the discriminator"}, {"start": 141.56, "end": 145.8, "text": " network becomes better at telling synthesized results from real ones."}, {"start": 145.8, "end": 150.52, "text": " Over time, this rivalry leads to results that are of extremely high quality."}, {"start": 150.52, "end": 155.0, "text": " And as you can see in this comparison, this new technique smokes the competition."}, {"start": 155.0, "end": 159.48000000000002, "text": " The paper contains a ton of more results and comparisons, and one of the most exhaustive"}, {"start": 159.48000000000002, "end": 163.08, "text": " evaluation sections I've seen in texture synthesis so far."}, {"start": 163.08, "end": 164.96, "text": " I highly recommend reading it."}, {"start": 164.96, "end": 168.68, "text": " If you would like to see more episodes like this, make sure to pick up one of the cool"}, {"start": 168.68, "end": 174.28, "text": " perks we offer through Patreon, such as deciding the order of future episodes or getting"}, {"start": 174.28, "end": 178.52, "text": " your name in the video description of every episode as a key supporter."}, {"start": 178.52, "end": 182.72, "text": " We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin."}, {"start": 182.72, "end": 186.04000000000002, "text": " We had a few really generous pledges in the last few weeks."}, {"start": 186.04000000000002, "end": 190.36, "text": " I am quite stunned to be honest and I regret that I cannot come in contact with these"}, {"start": 190.36, "end": 191.60000000000002, "text": " fellow scholars."}, {"start": 191.60000000000002, "end": 193.8, "text": " If you can contact me, that would be great."}, {"start": 193.8, "end": 197.56, "text": " If not, thank you so much everyone for your unwavering support."}, {"start": 197.56, "end": 198.88000000000002, "text": " This is just incredible."}, {"start": 198.88, "end": 226.51999999999998, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cnquEovq1I4
We Taught an AI To Synthesize Materials 🔮
The paper "Gaussian Material Synthesis" and its source code is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Credits: We would like to thank Robin Marin for the material test scene and Vlad Miller for his help with geometry modeling. Scene and geometry credits: Gold Bars – JohnsonMartin, Christmas Ornaments – oenvoyage, Banana – sgamusse, Bowl – metalix, Grapes – PickleJones, Glass Fruits – BobReed64, Ice cream – b2przemo, Vases – Technausea, Break Time – Jay–Artist, Wrecking Ball – floydkids, Italian Still Life – aXel, Microplanet – marekv, Microplanet vegetation – macio. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #neuralrendering
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Due to popular request, here is a more intuitive explanation of our latest work. Believe it or not, when I started working on this, Two Minute Papers didn't even exist. In several research areas, there are cases where we can't talk about our work until it is published. I knew that the paper would not see the light of day for quite a while, if ever, so I started Two Minute Papers to be able to keep my sanity and deliver a hopefully nice piece of work on a regular basis. In the end, this took more than 3,000 work hours to complete, but it is finally here, and I am so happy to finally be able to present it to you. This work is at the intersection of computer graphics and AI, which you know is among my favorites. So what do we see here? This beautiful scene contains more than 100 different materials, each of which has been learned and synthesized by an AI. No two of these objects are alike; each of them has a different material model. The goal is to teach an AI the concept of material models, such as metals, minerals, and translucent materials. Traditionally, when we are looking to create a new material model with a light simulation program, we have to fiddle with quite a few parameters, and whenever we change something, we have to wait from 40 to 60 seconds until a noise-free result appears. In our solution, we don't need to play with these parameters. Instead, our goal is to grab a gallery of random materials, assign a score to each of them, saying that I like this one, I didn't like that one, and get an AI to learn our preferences and recommend new materials for us. This is quite useful when we are looking to synthesize not only one, but many materials. So this is learning algorithm number one, and it works really well for a variety of materials. However, these recommendations still have to be rendered with the light simulation program, which takes several hours for a gallery like the one you see here. Here comes learning algorithm number two to the rescue: the neural network that replaces the light simulation program and creates photorealistic visualizations. It is so fast that it not only does this in real time, but is more than 10 times faster than real time. We call this a neural renderer. So we have a lot of material recommendations, they are all photorealistic, and we can visualize them in real time. However, there is always a possibility that we have a recommendation that is almost exactly what we had in mind but needs a few adjustments. That's an issue, because to do that, we would have to go back to the parameter fiddling, which we really wanted to avoid in the first place. No worries, because the third learning algorithm is coming to the rescue. What this can do is take our favorite material models from the gallery and map them onto a nice 2D plane, where we can explore similar materials. If we combine this with the neural renderer, we can explore these photorealistic visualizations, and everything appears not in a few hours, but in real time. However, without a little further guidance, we get a bit lost, because we still don't know which regions in this 2D space are going to give us materials that are similar to the one we wish to fine-tune. We can further improve this by exploring different combinations of the three learning algorithms.
In the end, we can assign these colors to the background that describe either whether the AI expects us to like the output, or how similar the output will be. A nice use case of this is where we have this glassy still life scene, but the color of the grapes is a bit too vivid for us. Now, we can go to this 2D latent space and adjust it to our liking in real time. Much better. No material modeling expertise is required. So I hope you found this explanation intuitive. We tried really hard to create something that is both scientifically novel and also useful for the computer game and motion picture industry. We had to throw away hundreds of other ideas until this final system materialized. Make sure to have a look at the paper in the description, where every single element and learning algorithm is tested and evaluated one by one. If you are a journalist and you would like to write about this work, I would be most grateful, and I am also more than happy to answer questions in an interview format as well. Please reach out if you are interested. We also tried to give back to the community, so for the fellow tinkerers out there, the entirety of the paper is under the permissive Creative Commons license, and the full source code and pre-trained networks are also available under the even more permissive MIT license. Everyone is welcome to reuse it or build something cool on top of it. Thanks for watching and for your generous support, and I'll see you next time.
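Illustrative note: "learning algorithm number one" described above, which learns the user's preference scores over a gallery of random materials and recommends new ones, can be sketched with a Gaussian process regressor. The parameter dimensionality, kernel, and scoring scale below are assumptions for illustration, and the scikit-learn API stands in for the paper's exact model.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Gallery: each material is a parameter vector (albedo, roughness, and so on).
gallery = rng.uniform(size=(60, 8))
# The user scores each gallery item, e.g. 0 (dislike) to 10 (like); random
# numbers stand in for real user input here.
user_scores = rng.uniform(0, 10, size=60)

# Learn the user's preferences over material-parameter space.
model = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
model.fit(gallery, user_scores)

# Score a large pool of fresh random materials and recommend the best ones,
# which would then be rendered (or fed to the neural renderer) for the user.
candidates = rng.uniform(size=(5000, 8))
predicted = model.predict(candidates)
recommended = candidates[np.argsort(predicted)[-16:]]
print("Recommended", len(recommended), "materials for visualization")

The appeal of this kind of regressor for the workflow described above is that it learns from only a handful of user scores, so scoring one small gallery is enough to rank thousands of unseen materials.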
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Ejolene Ifehir."}, {"start": 4.64, "end": 9.48, "text": " Due to popular requests, here is a more intuitive explanation of our latest work."}, {"start": 9.48, "end": 14.88, "text": " Believe it or not, when I have started working on this, Two Minute Papers didn't even exist."}, {"start": 14.88, "end": 20.240000000000002, "text": " In several research areas, there are cases where we can't talk about our work until it is published."}, {"start": 20.240000000000002, "end": 24.84, "text": " I knew that the paper would not see the light of the day for quite a while, if ever,"}, {"start": 24.84, "end": 32.04, "text": " so I started Two Minute Papers to be able to keep my sanity and deliver a hopefully nice piece of work on a regular basis."}, {"start": 32.04, "end": 37.6, "text": " In the end, this took more than 3,000 work hours to complete, but it is finally here,"}, {"start": 37.6, "end": 41.04, "text": " and I am so happy to finally be able to present it to you."}, {"start": 41.04, "end": 46.879999999999995, "text": " This work is in the intersection of computer graphics and AI, which you know is among my favorites."}, {"start": 46.879999999999995, "end": 48.04, "text": " So what do we see here?"}, {"start": 48.04, "end": 52.120000000000005, "text": " This beautiful scene contains more than 100 different materials,"}, {"start": 52.12, "end": 55.879999999999995, "text": " each of which has been learned and synthesized by an AI."}, {"start": 55.879999999999995, "end": 60.8, "text": " None of these days is and then the lions are alike, each of them have a different material model."}, {"start": 60.8, "end": 68.36, "text": " The goal is to teach an AI the concept of material models, such as metals, minerals, and translucent materials."}, {"start": 68.36, "end": 73.6, "text": " Traditionally, when we are looking to create a new material model with a light simulation program,"}, {"start": 73.6, "end": 77.52, "text": " we have to fiddle with quite a few parameters, and whenever we change something,"}, {"start": 77.52, "end": 82.24, "text": " we have to wait from 40 to 60 seconds until a noise-free result appears."}, {"start": 82.24, "end": 85.24, "text": " In our solution, we don't need to play with these parameters."}, {"start": 85.24, "end": 91.32, "text": " Instead, our goal is to grab a gallery of random materials, assign a score to each of them,"}, {"start": 91.32, "end": 98.88, "text": " saying that I like this one, I didn't like that one, and get an AI to learn our preferences and recommend new materials for us."}, {"start": 98.88, "end": 104.19999999999999, "text": " This is quite useful when we are looking to synthesize not only one, but many materials."}, {"start": 104.2, "end": 109.84, "text": " So this is learning algorithm number one, and it works really well for a variety of materials."}, {"start": 109.84, "end": 114.8, "text": " However, these recommendations still have to be rendered with the light simulation program,"}, {"start": 114.8, "end": 118.8, "text": " which takes several hours for a gallery like the one you see here."}, {"start": 118.8, "end": 125.60000000000001, "text": " Here comes learning algorithm number two to the rescue, the neural network that replaces the light simulation program"}, {"start": 125.60000000000001, "end": 128.2, "text": " and creates photorealistic visualizations."}, {"start": 128.2, "end": 131.84, "text": " It is so fast, it not only does this in real 
time,"}, {"start": 131.84, "end": 135.08, "text": " but it is more than 10 times faster than real time."}, {"start": 135.08, "end": 137.08, "text": " We call this a neural renderer."}, {"start": 137.08, "end": 143.44, "text": " So we have a lot of material recommendations, and they are all photorealistic that we can visualize in real time."}, {"start": 143.44, "end": 149.8, "text": " However, it is always a possibility that we have a recommendation that is almost exactly what we had in mind,"}, {"start": 149.8, "end": 156.24, "text": " but need a few adjustments. That's an issue, because to do that, we would have to go back to the parameter fit link,"}, {"start": 156.24, "end": 159.08, "text": " which we really wanted to avoid in the first place."}, {"start": 159.08, "end": 163.0, "text": " No worries, because the third learning algorithm is coming to the rescue."}, {"start": 163.0, "end": 169.96, "text": " What this can do is take our favorite material models from the gallery and map them onto a nice 2D plane,"}, {"start": 169.96, "end": 172.32000000000002, "text": " where we can explore similar materials."}, {"start": 172.32000000000002, "end": 177.16000000000003, "text": " If we combine this with the neural renderer, we can explore these photorealistic visualizations,"}, {"start": 177.16000000000003, "end": 181.28, "text": " and everything appears not in a few hours, but in real time."}, {"start": 181.28, "end": 184.68, "text": " However, without a little further guidance, we get a bit lost,"}, {"start": 184.68, "end": 192.08, "text": " because we still don't know which regions in this 2D space are going to give us materials that are similar to the one we wish to fine-tune."}, {"start": 192.08, "end": 197.36, "text": " We can further improve this by exploring different combinations of the three learning algorithms."}, {"start": 197.36, "end": 204.64000000000001, "text": " In the end, we can assign these colors to the background that describe either whether the AI expects us to like the output,"}, {"start": 204.64000000000001, "end": 207.12, "text": " or how similar the output will be."}, {"start": 207.12, "end": 214.92000000000002, "text": " An ice use case of this is where we have this glassy still life scene,"}, {"start": 214.92000000000002, "end": 218.12, "text": " but the color of the grapes is a bit too vivid for us."}, {"start": 218.12, "end": 223.52, "text": " Now, we can go to this 2D latent space and adjust it to our liking in real time."}, {"start": 230.52, "end": 232.12, "text": " Much better."}, {"start": 232.12, "end": 234.92000000000002, "text": " No material modeling expertise is required."}, {"start": 234.92, "end": 237.51999999999998, "text": " So I hope you found this explanation intuitive."}, {"start": 237.51999999999998, "end": 245.92, "text": " We tried really hard to create something that is both scientifically novel and also useful for the computer game and motion picture industry."}, {"start": 245.92, "end": 250.51999999999998, "text": " We had to throw away hundreds of other ideas until this final system materialized."}, {"start": 250.51999999999998, "end": 258.12, "text": " Make sure to have a look at the paper in the description, where every single element and learning algorithm is tested and evaluated one by one."}, {"start": 258.12, "end": 262.91999999999996, "text": " If you are a journalist and you would like to write about this work, I would be most grateful,"}, {"start": 262.92, "end": 266.92, "text": " and I am also more than happy to answer questions 
in an interview format as well."}, {"start": 266.92, "end": 268.72, "text": " Please reach out if you are interested."}, {"start": 268.72, "end": 272.72, "text": " We also tried to give back to the community, so for the fellow tinkerers out there,"}, {"start": 272.72, "end": 277.32, "text": " the entirety of the paper is under the permissive creative commons license,"}, {"start": 277.32, "end": 284.12, "text": " and the full source code and pre-trained networks are also available under the even more permissive MIT license."}, {"start": 284.12, "end": 287.72, "text": " Everyone is welcome to reuse it or build something cool on top of it."}, {"start": 287.72, "end": 293.72, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wm8tK91k37U
This Evolving AI Finds Bugs in Games | Two Minute Papers #250
Our Patreon page: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari" by Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter is available here: https://arxiv.org/abs/1802.08842 The bug has been reproduced by a human here: Reproduction: https://www.youtube.com/watch?v=VGyeUuysyqg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-2619483/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize the score. This class of techniques enables us to train an AI to master a large variety of video games and has many more cool applications. For instance, in the game of Q*bert, at every time step, the AI has to choose the appropriate actions to control this orange character and light up all the cubes without hitting the purple enemy. This work proposes an interesting alternative to reinforcement learning named Evolution Strategies, and it aims to train not one agent but an entire population of agents in parallel. The efficiency of this population is assessed much like how evolution works in nature, and new offspring are created from the best performing candidates. Note that this is not the first paper using Evolution Strategies; this is a family of techniques that dates back to the 70s. However, an advantage of this variant is that it doesn't require long trial-and-error sessions to find an appropriate discount factor. But wait, what does this discount factor mean exactly? This is a number that describes whether the AI should focus only on immediate rewards at all costs or whether it should be willing to temporarily make worse decisions for a better payoff in the future. The optimal number is different for every game and depends on how much long-term planning it requires. With this evolutionary algorithm, we can skip this step entirely. And the really cool thing about this is that it is not only able to master many games, but after only 5 hours of training, it was able to find a way to abuse game mechanics in Q*bert in the most creative ways. It has found a glitch where it sacrifices itself to lure the purple blob into dropping down after it. And much to our surprise, it found that there is a bug: if it drops down from this position, it should lose a life for doing it, but due to a bug, it doesn't. It also learned another cool technique where it waits for the adversary to make a move and immediately goes the other way. Here's the same scene slowed down. It had also found and exploited another serious bug which was, to the best of my knowledge, previously unknown. After completing the first level, it starts jumping around in a seemingly random manner. A moment later, we see that the game does not advance to the next level, but cubes start blinking and the AI is free to score as many points as it wishes. After this video, a human player was able to reproduce this; I've put a link to it in the video description. It also found the age-old trick in Breakout where we dig a tunnel through the bricks, lean back, start reading a paper, and let physics solve the rest of the level. One of the greatest advantages of this technique is that instead of training only one agent, it works on an entire population. These agents can be trained independently, making the algorithm more parallelizable, which means that it is fast and maps really well to modern processors and graphics cards with many cores. And these algorithms are not only winning the game, they are breaking the game. Loving it. What a time to be alive. I think this is an incredible story that everyone needs to hear about. If you wish to help us with our quest and get exclusive perks for this series, please consider supporting us on Patreon.
We are available through patreon.com slash two-minute papers, and the link with the details is available in the video description. We also use part of these funds to give back to the community and empower research projects and conferences. For instance, we recently sponsored a conference aimed at teaching young scientists to write and present their papers at international venues. We are hoping to invest some more into upgrading our video editing rig in the near future. We also support cryptocurrencies such as Bitcoin, Ethereum and Litecoin. I am really grateful for your support. And this is why every video ends with, thanks for watching and for your generous support, and I'll see you next time.
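Illustrative note: the population-based evolution strategy described above can be sketched in a few lines. The fitness function below is a placeholder for actually playing one episode of the game with a given policy, and the population sizes and noise scale are illustrative, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Stand-in for "run one episode with a policy defined by params and
    # return the game score"; here just a toy objective to keep it runnable.
    return -np.sum((params - 0.5) ** 2)

dim, population, num_parents, sigma = 32, 50, 10, 0.1
parent = np.zeros(dim)                       # current policy parameters

for generation in range(200):
    noise = rng.normal(size=(population, dim))
    offspring = parent + sigma * noise       # perturbed candidate policies
    scores = np.array([fitness(o) for o in offspring])
    best = np.argsort(scores)[-num_parents:] # top performers survive
    parent = offspring[best].mean(axis=0)    # recombine into the new parent

print("best score:", fitness(parent))

Because every offspring can be evaluated independently, the inner loop over the population parallelizes trivially across processor cores, which is the parallelism advantage mentioned above.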
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two-minute papers with Karojol Naifahir."}, {"start": 4.5, "end": 9.4, "text": " Reinforcement learning is a learning algorithm that chooses a set of actions in an environment"}, {"start": 9.4, "end": 10.9, "text": " to maximize the score."}, {"start": 10.9, "end": 15.8, "text": " This class of techniques enables us to train an AI to master a large variety of video"}, {"start": 15.8, "end": 18.54, "text": " games and has many more cool applications."}, {"start": 18.54, "end": 23.6, "text": " For instance, in the game of Q-Birt, at every time step, the AI has to choose the appropriate"}, {"start": 23.6, "end": 28.36, "text": " actions to control this orange character and light up all the cubes without hitting"}, {"start": 28.36, "end": 29.560000000000002, "text": " the purple enemy."}, {"start": 29.56, "end": 34.44, "text": " This work proposes an interesting alternative to reinforcement learning and is named Evolution"}, {"start": 34.44, "end": 41.0, "text": " Strategies and it aims to train not one agent but an entire population of agents in parallel."}, {"start": 41.0, "end": 45.72, "text": " The efficiency of this population is assessed much like how evolution works in nature and"}, {"start": 45.72, "end": 49.36, "text": " new offspring are created from the best performing candidates."}, {"start": 49.36, "end": 54.120000000000005, "text": " Note that this is not the first paper using Evolution Strategies, this is a family of techniques"}, {"start": 54.120000000000005, "end": 55.96, "text": " that dates back to the 70s."}, {"start": 55.96, "end": 60.68, "text": " However, an advantage of this variant is that it doesn't require long trial and error"}, {"start": 60.68, "end": 64.04, "text": " sessions to find an appropriate discount factor."}, {"start": 64.04, "end": 67.48, "text": " But wait, what does this discount factor mean exactly?"}, {"start": 67.48, "end": 72.28, "text": " This is a number that describes whether the AI should focus only on immediate rewards"}, {"start": 72.28, "end": 77.6, "text": " at all costs or whether it should be willing to temporarily make worse decisions for a better"}, {"start": 77.6, "end": 79.28, "text": " payoff in the future."}, {"start": 79.28, "end": 83.92, "text": " The optimal number is different for every game and depends on how much long-term planning"}, {"start": 83.92, "end": 85.12, "text": " it requires."}, {"start": 85.12, "end": 89.04, "text": " With this evolutionary algorithm, we can skip this step entirely."}, {"start": 89.04, "end": 93.72, "text": " And the really cool thing about this is that it is not only able to master many games,"}, {"start": 93.72, "end": 98.92, "text": " but after only 5 hours of training, it was able to find a way to abuse game mechanics"}, {"start": 98.92, "end": 101.52000000000001, "text": " in Cuba in the most creative ways."}, {"start": 101.52000000000001, "end": 106.44, "text": " It has found a glitch where it sacrifices itself to lure the purple blob into dropping down"}, {"start": 106.44, "end": 107.44, "text": " after it."}, {"start": 107.44, "end": 110.52000000000001, "text": " And much to our surprise, it's found that there is a bug."}, {"start": 110.52000000000001, "end": 115.08000000000001, "text": " If it drops down from this position, it should lose a life for doing it, but due to a bug,"}, {"start": 115.08, "end": 116.2, "text": " it doesn't."}, {"start": 116.2, "end": 120.44, "text": " It also learned another cool technique where 
it waits for the adversary to make a move"}, {"start": 120.44, "end": 124.0, "text": " and immediately goes the other way."}, {"start": 124.0, "end": 132.84, "text": " Here's the same scene slowed down."}, {"start": 132.84, "end": 137.92, "text": " It had also found and exploited another serious bug which was to the best of my knowledge"}, {"start": 137.92, "end": 139.6, "text": " previously unknown."}, {"start": 139.6, "end": 144.4, "text": " After completing the first level, it starts jumping around in a seemingly random manner."}, {"start": 144.4, "end": 149.24, "text": " The moment later, we see that the game does not advance to the next level, but cubes"}, {"start": 149.24, "end": 154.56, "text": " start blinking and the AI is free to score as many points as it wishes."}, {"start": 154.56, "end": 158.48000000000002, "text": " After this video, a human player was able to reproduce this, I've put a link to it in"}, {"start": 158.48000000000002, "end": 159.72, "text": " the video description."}, {"start": 159.72, "end": 164.36, "text": " It also found out the agile trick-in breakout where we dig a tunnel through the bricks,"}, {"start": 164.36, "end": 169.20000000000002, "text": " lean back, start reading a paper, and let physics solve the rest of the level."}, {"start": 169.20000000000002, "end": 174.08, "text": " One of the greatest advantages of this technique is that instead of training only one agent,"}, {"start": 174.08, "end": 176.48000000000002, "text": " it works on an entire population."}, {"start": 176.48000000000002, "end": 181.32000000000002, "text": " These agents can be trained independently, making the algorithm more parallelizable, which"}, {"start": 181.32000000000002, "end": 186.64000000000001, "text": " means that it is fast and maps really well to modern processors and graphics cards with"}, {"start": 186.64000000000001, "end": 187.92000000000002, "text": " many cores."}, {"start": 187.92000000000002, "end": 192.48000000000002, "text": " And these algorithms are not only winning the game, they are breaking the game."}, {"start": 192.48000000000002, "end": 193.48000000000002, "text": " Loving it."}, {"start": 193.48000000000002, "end": 194.72000000000003, "text": " What a time to be alive."}, {"start": 194.72000000000003, "end": 198.24, "text": " I think this is an incredible story that everyone needs to hear about."}, {"start": 198.24, "end": 202.36, "text": " If you wish to help us with our quest and get exclusive perks for this series, please"}, {"start": 202.36, "end": 204.68, "text": " consider supporting us on Patreon."}, {"start": 204.68, "end": 210.64000000000001, "text": " We are available through patreon.com slash two-minute papers, and the link with the details is available"}, {"start": 210.64000000000001, "end": 212.0, "text": " in the video description."}, {"start": 212.0, "end": 216.76000000000002, "text": " We also use part of these funds to give back to the community and empower research projects"}, {"start": 216.76000000000002, "end": 218.08, "text": " and conferences."}, {"start": 218.08, "end": 223.36, "text": " For instance, we recently sponsored a conference aimed to teach young scientists to write and"}, {"start": 223.36, "end": 226.12, "text": " present their papers at international venues."}, {"start": 226.12, "end": 230.72000000000003, "text": " We are hoping to invest some more into upgrading our video editing rig in the near future."}, {"start": 230.72, "end": 235.04, "text": " We also support cryptocurrencies such as Bitcoin, Ethereum and 
Litecoin."}, {"start": 235.04, "end": 237.08, "text": " I am really grateful for your support."}, {"start": 237.08, "end": 241.12, "text": " And this is why every video ends with, thanks for watching and for your generous support,"}, {"start": 241.12, "end": 269.64, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fklY2nH7AJo
AI Learns Painterly Harmonization | Two Minute Papers #249
The paper "Deep Painterly Harmonization" and its source code is available here: https://arxiv.org/abs/1804.03189 https://github.com/luanfujun/deep-painterly-harmonization Pick up cool perks on Patreon: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3129429/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. When we show a photograph to someone, most of the time we are interested in sharing our memories. Graduation, family festivities, and beautiful landscapes are common examples of this. With the recent ascendancy of these amazing neural style transfer techniques, we can take a painting or any other source image and transfer the style of this image to our contents. The style is transferred, but the contents remain unchanged. This takes place by running the images through a deep neural network, which, in its deeper layers, learns about high-level concepts such as artistic style. This work has sparked a large body of follow-up research works. Feed-forward real-time style transfer, temporally coherent style transfer for videos, you name it. However, these techniques are always about taking one image for content and one for style. How about a new problem formulation where we paste in a part of a foreign image with a completely different style? For instance, if you feel that this ancient artwork is sorely missing a Captain America shield, or if Picasso's self-portrait is just not cool enough without shades, then this algorithm is for you. However, if we just drop in this part of a foreign image, anyone can immediately tell because of the differences in color and style. A previous non-AI-based technique does way better, but it is still apparent that the image has been tampered with. But as you can see here, this new technique is able to do it seamlessly. It works by first performing style transfer from the painting to the new region, and then in the second step, additional refinements are made to it to make sure that the response of our neural network is similar across the entirety of the painting. It is conjectured that if the neural network is stimulated the same way by every part of the image, then there shouldn't be outlier regions that look vastly different. And as you can see here, it works remarkably well on a range of inputs. To validate this work, a user study was done that revealed that the users preferred the new technique over the older ones in 15 out of 16 images. I think it is fair to say that this work smokes the competition. But what about comparisons to real paintings? A different user study was also created to answer this question, and the answer is that users were mostly unable to identify whether the painting was tampered with. The source code is also available, so let the experiments begin. Thanks for watching and for your generous support, and I'll see you next time.
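Illustrative note: the first step described above, transferring the painting's style onto the pasted region, builds on matching deep-feature statistics (Gram matrices). A rough sketch of such a style loss is shown below, assuming a recent PyTorch/torchvision; it is a generic style-transfer sketch, not the paper's exact two-stage harmonization pipeline, and the random tensors stand in for the real painting and pasted region.

import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def gram(features):
    # Gram matrix: channel-to-channel correlations of the feature maps.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Pretrained feature extractor, truncated to an intermediate layer.
vgg = vgg16(weights="DEFAULT").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

painting = torch.rand(1, 3, 256, 256)                     # host painting
pasted = torch.rand(1, 3, 256, 256, requires_grad=True)   # region to harmonize

target_gram = gram(vgg(painting)).detach()
opt = torch.optim.Adam([pasted], lr=0.02)

for step in range(300):
    # Pull the pasted region's feature statistics toward the painting's,
    # so the network responds to it the same way as to the rest of the image.
    loss = F.mse_loss(gram(vgg(pasted)), target_gram)
    opt.zero_grad(); loss.backward(); opt.step()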
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fahir."}, {"start": 4.28, "end": 8.620000000000001, "text": " When we show a photograph to someone, most of the time we are interested in sharing our"}, {"start": 8.620000000000001, "end": 9.620000000000001, "text": " memories."}, {"start": 9.620000000000001, "end": 14.620000000000001, "text": " Graduation, family festivities, beautiful landscapes are common examples of this."}, {"start": 14.620000000000001, "end": 19.06, "text": " With the recent ascendancy of these amazing neural-style transfer techniques, we can take"}, {"start": 19.06, "end": 25.080000000000002, "text": " a painting or any other source image and transfer the style of this image to our contents."}, {"start": 25.080000000000002, "end": 28.84, "text": " The style is transferred, but the contents remain unchanged."}, {"start": 28.84, "end": 33.4, "text": " This takes place by running the images through a deep neural network, which, in its deeper"}, {"start": 33.4, "end": 38.04, "text": " layers, learns about high-level concepts such as artistic style."}, {"start": 38.04, "end": 41.36, "text": " This work has sparked a large body of follow-up research works."}, {"start": 41.36, "end": 46.8, "text": " Feet forward real-time style transfer, temporarily coherent style transfer for videos, you name"}, {"start": 46.8, "end": 47.8, "text": " it."}, {"start": 47.8, "end": 52.92, "text": " However, these techniques are always about taking one image for content and one for style."}, {"start": 52.92, "end": 58.6, "text": " How about a new problem formulation where we paste in a part of a foreign image with a completely"}, {"start": 58.6, "end": 59.96, "text": " different style?"}, {"start": 59.96, "end": 64.68, "text": " For instance, if you feel that this ancient artwork is sorely missing a Captain America"}, {"start": 64.68, "end": 70.8, "text": " shield, or if Picasso's self-portrait is just not cool enough without shades, then this"}, {"start": 70.8, "end": 72.16, "text": " algorithm is for you."}, {"start": 72.16, "end": 77.32, "text": " However, if we just drop in this part of a foreign image, anyone can immediately tell"}, {"start": 77.32, "end": 80.4, "text": " because of the differences in color and style."}, {"start": 80.4, "end": 85.6, "text": " A previous non-AIB technique does way better, but it is still apparent that the image has"}, {"start": 85.6, "end": 86.96000000000001, "text": " been tempered with."}, {"start": 86.96, "end": 90.96, "text": " But as you can see here, this new technique is able to do it seamlessly."}, {"start": 90.96, "end": 96.32, "text": " It works by first performing style transfer from the painting to the new region, and then"}, {"start": 96.32, "end": 100.88, "text": " in the second step, additional refinements are made to it to make sure that the response"}, {"start": 100.88, "end": 105.11999999999999, "text": " of our neural network is similar across the entirety of the painting."}, {"start": 105.11999999999999, "end": 109.8, "text": " It is conjectured that if the neural network is stimulated the same way by every part"}, {"start": 109.8, "end": 114.11999999999999, "text": " of the image, then there shouldn't be outlier regions that look vastly different."}, {"start": 114.12, "end": 118.16000000000001, "text": " And as you can see here, it works remarkably well on a range of inputs."}, {"start": 118.16000000000001, "end": 122.72, "text": " To validate this work, a user study was done that 
revealed that the users preferred the"}, {"start": 122.72, "end": 127.08000000000001, "text": " new technique over the older ones in 15 out of 16 images."}, {"start": 127.08000000000001, "end": 130.72, "text": " I think it is fair to say that this work smokes the competition."}, {"start": 130.72, "end": 133.20000000000002, "text": " But what about comparisons to real paintings?"}, {"start": 133.20000000000002, "end": 137.68, "text": " A different user study was also created to answer this question, and the answer is that"}, {"start": 137.68, "end": 142.28, "text": " users were mostly unable to identify whether the painting was tempered with."}, {"start": 142.28, "end": 145.84, "text": " The source code is also available, so let the experiments begin."}, {"start": 145.84, "end": 175.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DglrYx9F3UU
This AI Reproduces Human Perception | Two Minute Papers #248
The paper "The Unreasonable Effectiveness of Deep Networks as a Perceptual Metric" is available here: https://richzhang.github.io/PerceptualSimilarity/ Our Patreon page: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Other papers showcased in the video: Automatic Parameter Control for Metropolis Light Transport - https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Gaussian Material Synthesis - https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1285294/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Assessing how similar two images are has been a long-standing problem in computer graphics. For instance, if we write a new light simulation program, we have to compare our results against the output of other algorithms and a noise-free reference image. However, this often means that we have many noisy images, but the structure of the noise is different. This leads to endless arguments on which algorithm is favorable to the others, since who really gets to decide what kind of noise is favorable and what is not? These are important and long-standing questions that we need to find answers to. In another application, we took a photorealistic material model and wanted to visualize other materials that look similar to it. However, in order to do this, we need to explain to the computer what it means that two images are similar. This is what we call a similarity metric. Have a look at this reference image and these two variants of it. Which one is more similar to it, the blurred or the warped version? Well, according to most humans, warping is considered a less intrusive operation. However, some of the most ubiquitous similarity metrics, like computing a simple per-pixel difference, think otherwise. Not good. What about this comparison? Which image is closer to the reference? The noisy or the blurry one? Most humans say that the noisy image is more similar, perhaps because with enough patience, one could remove all the noise pixel by pixel and get back the reference image, but in the blurry image, lots of features are permanently lost. Again, the classical error metrics think otherwise. Not good. And now comes the twist. If we build a database from many of these human decisions and feed it into a deep neural network, we'll find that this network will be able to learn and predict how humans see differences in images. This is exactly what we are looking for. You can see the agreement between this new similarity metric and these example differences. However, this shows the agreement on only three images. That could easily happen by chance. So this chart shows how different techniques correlate with how humans see differences in images. The higher the number, the higher the chance that it thinks similarly to humans. The ones labeled with LPIPS denote the new proposed technique used on several different classical neural network architectures. This is really great news for all kinds of research works that include working with images. I can't wait to start experimenting with it. The paper also contains a more elaborate discussion on failure cases as well, so make sure to have a look. Also, if you would like to help us do more to spread the word about these incredible works and pick up cool perks, please consider supporting us on Patreon. Each dollar you contribute is worth more than a thousand views, which is a ton of help for the channel. We also accept cryptocurrencies such as Bitcoin, Ethereum and Litecoin. Details are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
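Illustrative note: the learned metric described above is distributed by the authors as the "lpips" Python package, so a minimal usage sketch contrasting it with a classical per-pixel difference might look like the following; the random tensors stand in for real images scaled to the [-1, 1] range the package expects.

import torch
import lpips  # pip install lpips

metric = lpips.LPIPS(net="alex")   # learned perceptual similarity metric

reference = torch.rand(1, 3, 64, 64) * 2 - 1
blurry    = torch.rand(1, 3, 64, 64) * 2 - 1   # placeholders for real images
noisy     = torch.rand(1, 3, 64, 64) * 2 - 1

def per_pixel(a, b):
    # Classical L2 difference, the kind of metric that often disagrees
    # with human judgments as described above.
    return torch.mean((a - b) ** 2).item()

print("L2   :", per_pixel(reference, blurry), per_pixel(reference, noisy))
print("LPIPS:", metric(reference, blurry).item(), metric(reference, noisy).item())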
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Carlos Zsolnai-Fahir."}, {"start": 4.48, "end": 9.88, "text": " Assessing how similar two images are has been a long-standing problem in computer graphics."}, {"start": 9.88, "end": 15.0, "text": " For instance, if we write a new light simulation program, we have to compare our results against"}, {"start": 15.0, "end": 19.12, "text": " the output of other algorithms and a noise-free reference image."}, {"start": 19.12, "end": 23.96, "text": " However, this often means that we have many noisy images, but the structure of the noise"}, {"start": 23.96, "end": 24.96, "text": " is different."}, {"start": 24.96, "end": 29.64, "text": " This leads to endless arguments on which algorithm is favorable to the others, since who"}, {"start": 29.64, "end": 34.28, "text": " really gets to decide what kind of noise is favorable and what is not."}, {"start": 34.28, "end": 38.52, "text": " These are important and long-standing questions that we need to find answers to."}, {"start": 38.52, "end": 43.72, "text": " In another application, we took a photorealistic material model and wanted to visualize other"}, {"start": 43.72, "end": 46.2, "text": " materials that look similar to it."}, {"start": 46.2, "end": 51.08, "text": " However, in order to do this, we need to explain to the computer what it means that two"}, {"start": 51.08, "end": 52.68, "text": " images are similar."}, {"start": 52.68, "end": 55.32, "text": " This is what we call a similarity metric."}, {"start": 55.32, "end": 59.2, "text": " Have a look at this reference image and these two variants of it."}, {"start": 59.2, "end": 62.88, "text": " Which one is more similar to it, the blurred or the worked version?"}, {"start": 62.88, "end": 67.8, "text": " Well, according to most humans, warping is considered a less intrusive operation."}, {"start": 67.8, "end": 73.12, "text": " However, some of the most ubiquitous similarity metrics, like computing a simple per pixel"}, {"start": 73.12, "end": 75.36, "text": " difference, thinks otherwise."}, {"start": 75.36, "end": 76.36, "text": " Not good."}, {"start": 76.36, "end": 77.88, "text": " What about this comparison?"}, {"start": 77.88, "end": 79.88, "text": " Which image is closer to the reference?"}, {"start": 79.88, "end": 82.08, "text": " The noisy or the blurry one?"}, {"start": 82.08, "end": 87.16, "text": " Most humans say that the noisy image is more similar, perhaps because with enough patience,"}, {"start": 87.16, "end": 91.88, "text": " one could remove all the noise pixel by pixel and get back the reference image, but in"}, {"start": 91.88, "end": 95.28, "text": " the blurry image, lots of features are permanently lost."}, {"start": 95.28, "end": 98.75999999999999, "text": " Again, the classical error metrics think otherwise."}, {"start": 98.75999999999999, "end": 99.75999999999999, "text": " Not good."}, {"start": 99.75999999999999, "end": 101.24, "text": " And now comes the twist."}, {"start": 101.24, "end": 106.72, "text": " If we build a database for many of these human decisions, feed it into a deep neural network,"}, {"start": 106.72, "end": 111.64, "text": " we'll find that this network will be able to learn and predict how humans see differences"}, {"start": 111.64, "end": 112.64, "text": " in images."}, {"start": 112.64, "end": 114.56, "text": " This is exactly what we are looking for."}, {"start": 114.56, "end": 119.28, "text": " You can see the agreement between this new similarity metric 
and these example differences."}, {"start": 119.28, "end": 123.48, "text": " However, this shows the agreement on only three images."}, {"start": 123.48, "end": 125.68, "text": " That could easily happen by chance."}, {"start": 125.68, "end": 130.68, "text": " So this chart shows how different techniques correlate with how humans see differences"}, {"start": 130.68, "end": 131.68, "text": " in images."}, {"start": 131.68, "end": 136.08, "text": " The higher the number, the higher the chance that it thinks similarly to humans."}, {"start": 136.08, "end": 142.0, "text": " The ones labeled with LPIPS denote the new proposed technique used on several different"}, {"start": 142.0, "end": 144.36, "text": " classical neural network architectures."}, {"start": 144.36, "end": 148.4, "text": " This is really great news for all kinds of research works that include working with"}, {"start": 148.4, "end": 149.4, "text": " images."}, {"start": 149.4, "end": 151.84, "text": " I can't wait to start experimenting with it."}, {"start": 151.84, "end": 156.36, "text": " The paper also contains a more elaborate discussion on failure cases as well, so make"}, {"start": 156.36, "end": 157.52, "text": " sure to have a look."}, {"start": 157.52, "end": 161.72000000000003, "text": " Also, if you would like to help us do more to spread the word about these incredible"}, {"start": 161.72000000000003, "end": 166.24, "text": " works and pick up cool perks, please consider supporting us on Patreon."}, {"start": 166.24, "end": 170.64000000000001, "text": " Each dollar you contribute is worth more than a thousand views, which is a ton of help"}, {"start": 170.64000000000001, "end": 171.64000000000001, "text": " for the channel."}, {"start": 171.64, "end": 176.16, "text": " We also accept crypto currencies such as Bitcoin, Ethereum and Litecoin."}, {"start": 176.16, "end": 178.35999999999999, "text": " Details are available in the video description."}, {"start": 178.36, "end": 205.12, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=gvjCu7zszbQ
This AI Learns From Its Dreams | Two Minute Papers #247
The paper "World Models" is available here: https://arxiv.org/abs/1803.10122 https://worldmodels.github.io/ Support the series and pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3077928/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about an AI that not only plays video games really well, but can also dream up new, unseen scenarios, and more. This is an interesting new framework that contains a vision model that compresses what it has seen in the game into an internal code. As you see here, these latent variables are responsible for capturing different level designs. And this variable simulates time and shows how the fireballs move towards us over time. This is a highly compressed internal representation that captures the most important aspects of the game. We also have a memory unit that not only stores previous experiences, but, similarly to how an earlier work predicted the next pen strokes of a drawing, can also dream up new gameplay. Finally, it is also endowed with a controller unit that is responsible for making decisions as to how to play the game. Here, you see the algorithm in action. On the left, there is the actual gameplay, and on the right, you see its compressed internal representation. This is how the AI thinks about the game. The point is that it is lossy, therefore some information is lost, but the essence of the game is retained. So, this sounds great, the novelty is clear, but how well does it play the game? Well, in this racing game, on a selection of 100 random tracks, its average score is almost 3 times that of DeepMind's groundbreaking deep Q-learning algorithm. This was the AI that took the world by storm when DeepMind demonstrated how it learned to play Atari Breakout and many other games on a superhuman level. This is almost 3 times better than that on the racetrack game, though it is to be noted that DeepMind has also made great strides since their original DQN work. And now comes the even more exciting part: because it can create an internal dream representation of the game, and this representation really captures the essence of the game, it is also able to play and train within these dreams. Essentially, it makes up dream scenarios and learns how to deal with them without playing the actual game. It is a bit like how we prepare for a first date, imagining what to say and how to say it, or imagining how we would incapacitate an attacker with our karate chops if someone were to attack us. And the cool thing is that with this AI, this dream training actually works, which means that the newly learned dream strategies translate really well to the real game. We really have only scratched the surface, so make sure to read the paper in the description. This is a really new and fresh idea, and I think it will give birth to a number of follow-up papers. Cannot wait to report on these back to you, so stay tuned and make sure to subscribe and hit the bell icon to never miss an episode. Thanks for watching and for your generous support, and I'll see you next time.
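Illustrative note: the three components described above (vision model, memory unit, controller) can be sketched as follows, assuming a recent PyTorch; the layer sizes and wiring are illustrative assumptions, not the paper's exact architecture, which uses a variational autoencoder and a mixture-density RNN.

import torch
import torch.nn as nn

class Vision(nn.Module):
    # Compresses a game frame into a small latent code z.
    def __init__(self, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(z_dim),
        )

    def forward(self, frame):
        return self.enc(frame)

class Memory(nn.Module):
    # Predicts how the latent code evolves over time (the "dreaming" part).
    def __init__(self, z_dim=32, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(z_dim, hidden, batch_first=True)

    def forward(self, z_sequence, state=None):
        return self.rnn(z_sequence, state)

class Controller(nn.Module):
    # Maps the latent code plus the memory state to an action.
    def __init__(self, z_dim=32, hidden=256, actions=3):
        super().__init__()
        self.fc = nn.Linear(z_dim + hidden, actions)

    def forward(self, z, h):
        return torch.tanh(self.fc(torch.cat([z, h], dim=-1)))

# One step: observe a frame, update the memory, pick an action.
vision, memory, controller = Vision(), Memory(), Controller()
frame = torch.rand(1, 3, 64, 64)
z = vision(frame)
out, (h, c) = memory(z.unsqueeze(1))
action = controller(z, h[-1])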
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karojolnai-Fehir."}, {"start": 4.0, "end": 9.0, "text": " Today, we are going to talk about an AI that not only plays video games really well,"}, {"start": 9.0, "end": 13.0, "text": " but can also dream up new, unseen scenarios, and more."}, {"start": 13.0, "end": 16.0, "text": " This is an interesting new framework that contains a vision model"}, {"start": 16.0, "end": 20.0, "text": " that compresses what it has seen in the game into an internal code."}, {"start": 20.0, "end": 26.0, "text": " As you see here, these latent variables are responsible to capture different level designs."}, {"start": 26.0, "end": 31.0, "text": " And this variable simulates time and shows how the fireballs move towards us over time."}, {"start": 31.0, "end": 37.0, "text": " This is a highly compressed internal representation that captures the most important aspects of the game."}, {"start": 37.0, "end": 41.0, "text": " We also have a memory unit that not only stores previous experiences,"}, {"start": 41.0, "end": 46.0, "text": " but similarly to how an earlier work predicted the next pan strokes of a drawing,"}, {"start": 46.0, "end": 49.0, "text": " this can also dream up new gameplay."}, {"start": 49.0, "end": 54.0, "text": " Finally, it is also endowed with a controller unit that is responsible for making decisions"}, {"start": 54.0, "end": 56.0, "text": " as to how to play the game."}, {"start": 56.0, "end": 58.0, "text": " Here, you see the algorithm in action."}, {"start": 58.0, "end": 64.0, "text": " On the left, there is the actual gameplay, and on the right, you see its compressed internal representation."}, {"start": 64.0, "end": 67.0, "text": " This is how the AI thinks about the game."}, {"start": 67.0, "end": 71.0, "text": " The point is that it is lossy, therefore some information is lost,"}, {"start": 71.0, "end": 74.0, "text": " but the essence of the game is retained."}, {"start": 74.0, "end": 78.0, "text": " So, this sounds great, the novelty is clear, but how well does it play the game?"}, {"start": 78.0, "end": 82.0, "text": " Well, in this racing game, on a selection of 100 random tracks,"}, {"start": 82.0, "end": 88.0, "text": " its average score is almost 3 times that of DeepMind's groundbreaking DeepQ learning algorithm."}, {"start": 88.0, "end": 94.0, "text": " This was the AI that took the world by storm when DeepMind demonstrated how it learned to play Atari Breakout"}, {"start": 94.0, "end": 97.0, "text": " and many other games on a superhuman level."}, {"start": 97.0, "end": 101.0, "text": " This is almost 3 times better than that on the racetrack game,"}, {"start": 101.0, "end": 106.0, "text": " though it is to be noted that DeepMind has also made great strides since their original DQ and work."}, {"start": 106.0, "end": 109.0, "text": " And now comes the even more exciting part,"}, {"start": 109.0, "end": 113.0, "text": " because it can create an internal dream representation of the game,"}, {"start": 113.0, "end": 117.0, "text": " and this representation really captures the essence of the game,"}, {"start": 117.0, "end": 122.0, "text": " then it means that it is also able to play and train within these dreams."}, {"start": 122.0, "end": 128.0, "text": " Essentially, it makes up dream scenarios and learns how to deal with them without playing the actual game."}, {"start": 128.0, "end": 133.0, "text": " It is a bit like how we prepare for a first date, imagining what to say and how to say 
it,"}, {"start": 133.0, "end": 139.0, "text": " or imagining how we would incapacitate an attacker with our karate chops if someone were to attack us."}, {"start": 139.0, "end": 144.0, "text": " And the cool thing is that with this AI, this dream training actually works,"}, {"start": 144.0, "end": 149.0, "text": " which means that the newly learned dream strategies translate really well to the real game."}, {"start": 149.0, "end": 153.0, "text": " We really have only scratched the surface, so make sure to read the paper in the description."}, {"start": 153.0, "end": 159.0, "text": " This is a really new and fresh idea, and I think it will give birth to a number of follow-up papers."}, {"start": 159.0, "end": 166.0, "text": " Cannot wait to report on these back to you, so stay tuned and make sure to subscribe and hit the bell icon to never miss an episode."}, {"start": 166.0, "end": 193.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UMSNBLAfC7o
This Robot Adapts Like Animals | Two Minute Papers #246
The paper "Robots that can adapt like animals" and its source code is available here: https://members.loria.fr/jbmouret/nature_press.html https://members.loria.fr/code/ite_limbo_nature.zip Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about building a robot that works even when damaged, and you will see that the results are just unreal. There are many important applications for such a robot, where sending out humans may be too risky, such as putting out forest fires, finding earthquake survivors under rubble, or shutting down a malfunctioning nuclear plant. Since these are all dangerous use cases, it is a requirement that such a robot works even when damaged. The key idea to accomplish this is that we allow the robot to perform tasks such as walking not only in one optimal way, but to explore and build a map of many alternative motions relying on different body parts. Some of these limping motions are clearly not optimal, but whenever damage happens to the robot, it will immediately be able to choose at least one alternative way to move around, even with broken or missing legs. After building the map, it can be used as additional knowledge to lean on when damage occurs, and the robot doesn't have to relearn everything from scratch. This is great, especially given that damage usually happens in the presence of danger, and in these cases reacting quickly can be a matter of life and death. However, creating such a map takes a ton of trial and error, potentially more than what we can realistically get the robot to perform. And now comes my favorite part, which is starting the project in a computer simulation, and then, in the next step, deploying the trained AI to a real robot. This previously mentioned map of movements contains over 13,000 different kinds of gaits, and since we are in a simulation, it can be computed efficiently and conveniently. In software, we can also simulate all kinds of damage for free without dismembering our real robot. And since no simulation is perfect, after this step, the AI is deployed to the real robot, which evaluates and adjusts to the differences. By the way, this is the same robot that surprised us in a previous episode when it showed that it can walk around just fine without any foot contact with the ground by jumping on its back and using its elbows. I can only imagine how much work this project took, and the results speak for themselves. It is also very easy to see the immediate utility of such a project. Bravo! I also recommend looking at the press materials. For instance, in the frequently asked questions, many common misunderstandings are addressed. For example, it is noted that the robot doesn't understand the kind of damage that occurred and doesn't repair itself in the strict sense, but it tries to find alternative ways to function. Thanks for watching and for your generous support, and I'll see you next time.
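A hedged, toy illustration of the idea described above: a large map of candidate gaits is prepared in simulation, and after damage the robot only needs a handful of real-world trials to find one that still works. The actual paper builds this map with an evolutionary method and adapts with Bayesian optimization; the sketch below is a much simpler stand-in with made-up numbers.

```python
# Toy sketch of the "pre-computed map of gaits + quick adaptation" idea described above.
# This is not the authors' algorithm; everything below is a simplified, invented stand-in.
import numpy as np

rng = np.random.default_rng(1)

# Pretend the simulation produced thousands of candidate gaits, each with a
# predicted walking speed. Here they are just random numbers.
n_gaits = 13000
predicted_speed = rng.uniform(0.2, 1.0, size=n_gaits)

def real_speed_after_damage(gait_id):
    """Placeholder for trying a gait on the damaged physical robot:
    the real speed is the simulated prediction distorted by unknown damage."""
    damage_penalty = rng.uniform(0.0, 0.8)
    return max(0.0, predicted_speed[gait_id] - damage_penalty)

# Adaptation: try the most promising untested gaits until one works well enough.
speed_threshold = 0.5
tested = {}
for trial in range(20):                              # a handful of real-world trials, not thousands
    best_untested = max(
        (g for g in range(n_gaits) if g not in tested),
        key=lambda g: predicted_speed[g],
    )
    tested[best_untested] = real_speed_after_damage(best_untested)
    if tested[best_untested] >= speed_threshold:
        print(f"found a working gait {best_untested} after {trial + 1} trials, "
              f"speed {tested[best_untested]:.2f}")
        break
else:
    print("no gait above the threshold found in the trial budget")
```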
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo-Ijola-Ifahir."}, {"start": 4.32, "end": 9.44, "text": " This work is about building a robot that works even when being damaged, and you will see"}, {"start": 9.44, "end": 11.92, "text": " that the results are just unreal."}, {"start": 11.92, "end": 15.92, "text": " There are many important applications for such a robot, where sending out humans may be"}, {"start": 15.92, "end": 21.400000000000002, "text": " too risky, such as putting out forest fires, finding earthquake survivors under rubble,"}, {"start": 21.400000000000002, "end": 24.68, "text": " or shutting down a malfunctioning nuclear plant."}, {"start": 24.68, "end": 29.8, "text": " Since these are all dangerous use cases, it is a requirement that such a robot works even"}, {"start": 29.8, "end": 30.8, "text": " when damaged."}, {"start": 30.8, "end": 35.660000000000004, "text": " The key idea to accomplish this is that we allow the robot to perform tasks such as"}, {"start": 35.660000000000004, "end": 41.36, "text": " walking not only in one optimal way, but to explore and build a map of many alternative"}, {"start": 41.36, "end": 44.24, "text": " motions relying on different body parts."}, {"start": 44.24, "end": 48.92, "text": " Some of these limping motions are clearly not optimal, but whenever damage happens to"}, {"start": 48.92, "end": 54.16, "text": " the robot, it will immediately be able to choose at least one alternative way to move around"}, {"start": 54.16, "end": 56.64, "text": " even with broken or missing legs."}, {"start": 56.64, "end": 61.8, "text": " After building the map, it can be used as additional knowledge to lean on when the damage occurs,"}, {"start": 61.8, "end": 65.12, "text": " and the robot doesn't have to relearn everything from scratch."}, {"start": 65.12, "end": 69.88, "text": " This is great, especially given that damage usually happens in the presence of danger, and"}, {"start": 69.88, "end": 74.12, "text": " in these cases reacting quickly can be a matter of life and death."}, {"start": 74.12, "end": 79.14, "text": " However, creating such a map takes a ton of trial and error, potentially more than what"}, {"start": 79.14, "end": 82.0, "text": " we can realistically get the robot to perform."}, {"start": 82.0, "end": 87.52, "text": " And now comes my favorite part, which is starting the project in a computer simulation, and"}, {"start": 87.52, "end": 92.08, "text": " then in the next step, deploying the trained AI to a real robot."}, {"start": 92.08, "end": 97.72, "text": " This previously mentioned map of movements contains over 13,000 different kinds of gates,"}, {"start": 97.72, "end": 102.68, "text": " and since we are in a simulation, it can be computed efficiently and conveniently."}, {"start": 102.68, "end": 107.76, "text": " In software, we can also simulate all kinds of damage for free without dismembering our"}, {"start": 107.76, "end": 109.12, "text": " real robot."}, {"start": 109.12, "end": 114.16000000000001, "text": " And since no simulation is perfect, after this step, the AI is deployed to the real robot"}, {"start": 114.16000000000001, "end": 117.2, "text": " that evaluates and adjusts to the differences."}, {"start": 117.2, "end": 121.96000000000001, "text": " By the way, this is the same robot that surprised us in a previous episode when it showed that"}, {"start": 121.96000000000001, "end": 127.08000000000001, "text": " it can walk around just fine without any food contact with 
the ground by jumping on its"}, {"start": 127.08000000000001, "end": 129.12, "text": " back and using its elbows."}, {"start": 129.12, "end": 134.16, "text": " I can only imagine how much work this project took, and the results speak for themselves."}, {"start": 134.16, "end": 138.52, "text": " It is also very easy to see the immediate utility of such a project."}, {"start": 138.52, "end": 139.52, "text": " Bravo!"}, {"start": 139.52, "end": 141.76000000000002, "text": " I also recommend looking at the press materials."}, {"start": 141.76000000000002, "end": 146.52, "text": " For instance, in the frequently asked questions, many common misunderstandings are addressed."}, {"start": 146.52, "end": 151.28, "text": " For instance, it is noted that the robot doesn't understand the kind of damage that occurred,"}, {"start": 151.28, "end": 155.76000000000002, "text": " and doesn't repair itself in the strict sense, but it tries to find alternative ways to"}, {"start": 155.76000000000002, "end": 156.76000000000002, "text": " function."}, {"start": 156.76, "end": 183.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=m9XyXiL6n8w
AI Learns Real-Time 3D Face Reconstruction | Two Minute Papers #245
The paper "Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network" and its source code is available here: https://arxiv.org/abs/1803.07835 https://github.com/YadiraF/PRNet Addicted? Pick up cool perks on our Patreon page! - https://www.patreon.com/TwoMinutePapers A few comments with some of the best applications: Lowell Camp - "This technology could be used for consumer-budget markerless facial motion capture, and if a follow-up paper enhances it with audio analysis for tongue posing, then it would require very little touch-up beyond a little temporal filtering." Milleoiseau - "VOIP in game but with face tracking." Evan - "Could this be used for some kind of automatic lip-reading system for deaf viewers to view live events?" Matan - "Monitor emotions for product improvement." Idjles Erle - "Reconstructing ancestors faces from photos that are 150 years old. Working out from old photos who is more likely rested to whom." Morph Verse - "Maybe create a toolsets for artists to support easy correct anatomy tools in characters with facial and body features, for faster workflow in apps like Blender or 3ds." Bernard van Tonder - "Encourage others to watch educational content: Let celebrities/sport idols teach important subjects by mapping their faces and voices onto people's faces in educational videos." Adam de Anda - "Online shopping could get much more personalized. Send a selfie and be able to see sunglasses, hats, jewelry etc on your own face and able to rotate the image. Damn this actually pretty solid" We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-1722556/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we have two extremely hard problems on the menu. One is facial alignment and the other is 3D facial reconstruction. For both problems, we have an image as an input, and the output should be either a few lines that mark the orientation of the jawline, mouth and eyes, or, in the other case, a full 3D computer model of the face. And all this should happen automatically, without any user intervention. This is extremely difficult because this means that we need an algorithm that takes a 2D image and somehow captures 3D information from this 2D projection, much like a human would. This all sounds great and would be super useful in creating 3D avatars for Skype calls or scanning real humans to place them in digital media such as feature movies and games. This would be amazing, but is this really possible? This work uses a convolutional neural network to accomplish this, and it not only provides high-quality outputs, but it creates them in less than 10 milliseconds per image, which means that it can process a hundred of them every second. That is great news indeed, because it also means that doing this for video in real time is also a possibility. But not so fast, because if we are talking about video, new requirements arise. For instance, it is important that such a technique is resilient against changes in lighting. This means that if we have different lighting conditions, the output geometry the algorithm gives us shouldn't be wildly different. The same applies to camera and pose as well. This algorithm is resilient against all three, and it has some additional goodies. For instance, it finds the eyes properly through glasses and can deal with cases where the jawline is occluded by the hair, or with unfortunate poses where one side of the face is not visible at all. One of the key ideas is to give additional instruction to the convolutional neural network to focus more of its efforts on reconstructing the center regions of the face, because that region contains more discriminative features. The paper also contains a study that details the performance of this algorithm. It reveals that it is not only 5 to 8 times faster than the competition, but also provides higher quality solutions. Since these are likely to be deployed in real-world applications very soon, it is a good time to start brainstorming about possible applications for this. If you have ideas beyond the animation movies and games line, let me know in the comment section. I will put the best ones in the video description. Thanks for watching and for your generous support and I'll see you next time.
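As a rough illustration of the "focus on the center of the face" idea mentioned above, here is a small sketch of a per-pixel weighted loss, where pixels in the feature-rich central region contribute more to the training error. The map size and the 4x weight are invented for the example; they are not the values used in the paper.

```python
# Sketch of a weighted reconstruction loss: central face pixels count more than the rest.
# All sizes, weights and the random "predictions" below are illustrative placeholders.
import numpy as np

def weighted_position_map_loss(predicted, target, weight_mask):
    """Mean squared error where each pixel is scaled by an importance weight."""
    return np.mean(weight_mask * (predicted - target) ** 2)

H = W = 8                                   # tiny stand-in for the network's output map
rng = np.random.default_rng(2)
predicted = rng.random((H, W, 3))           # predicted 3D position for every pixel
target    = rng.random((H, W, 3))           # ground-truth 3D positions

# Hypothetical weighting: the centre of the map (eyes, nose, mouth region) counts 4x,
# the rest of the face 1x.
weight_mask = np.ones((H, W, 1))
weight_mask[2:6, 2:6, :] = 4.0

print("weighted loss:", weighted_position_map_loss(predicted, target, weight_mask))
```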
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.0, "end": 7.0, "text": " Today, we have two extremely hard problems on the menu."}, {"start": 7.0, "end": 12.0, "text": " One is facial alignment and the other is 3D facial reconstruction."}, {"start": 12.0, "end": 17.0, "text": " For both problems, we have an image as an input and the output should be either a few lines"}, {"start": 17.0, "end": 22.0, "text": " that mark the orientation of the jawline, mouth and eyes, and in the other case,"}, {"start": 22.0, "end": 26.0, "text": " we are looking for a full 3D computer model of the face."}, {"start": 26.0, "end": 30.0, "text": " And all this should happen automatically without any user intervention."}, {"start": 30.0, "end": 35.0, "text": " This is extremely difficult because this means that we need an algorithm that takes a 2D image"}, {"start": 35.0, "end": 41.0, "text": " and somehow captures 3D information from this 2D projection much like a human would."}, {"start": 41.0, "end": 47.0, "text": " This all sounds great and would be super useful in creating 3D avatars for Skype calls"}, {"start": 47.0, "end": 53.0, "text": " or scanning real humans to place them in digital media such as feature movies and games."}, {"start": 53.0, "end": 56.0, "text": " This would be amazing, but is this really possible?"}, {"start": 56.0, "end": 59.0, "text": " This work uses a convolutional neural network to accomplish this"}, {"start": 59.0, "end": 66.0, "text": " and it not only provides high quality outputs, but it creates them in less than 10 milliseconds per image,"}, {"start": 66.0, "end": 70.0, "text": " which means that it can process a hundred of them every second."}, {"start": 70.0, "end": 76.0, "text": " That is great news indeed because it also means that doing this for video in real time is also a possibility."}, {"start": 76.0, "end": 81.0, "text": " But not so fast because if we are talking about video, no requirements arise."}, {"start": 81.0, "end": 86.0, "text": " For instance, it is important that such a technique is resilient against changes in lighting."}, {"start": 86.0, "end": 91.0, "text": " This means that if we have different lighting conditions, the output geometry the algorithm gives us"}, {"start": 91.0, "end": 95.0, "text": " shouldn't be widely different. 
The same applies to camera and pose as well."}, {"start": 95.0, "end": 100.0, "text": " This algorithm is resilient against all 3 and it has some additional goodies."}, {"start": 100.0, "end": 105.0, "text": " For instance, it finds the eyes properly through glasses and can deal with cases"}, {"start": 105.0, "end": 111.0, "text": " where the jawline is occluded by the hair or in furate shape when one side is not visible at all."}, {"start": 111.0, "end": 116.0, "text": " One of the key ideas is to give additional instruction to the convolutional neural network"}, {"start": 116.0, "end": 120.0, "text": " to focus more of its efforts to reconstruct the center regions of the face"}, {"start": 120.0, "end": 123.0, "text": " because that region contains more discriminative features."}, {"start": 123.0, "end": 128.0, "text": " The paper also contains a study that details the performance of this algorithm."}, {"start": 128.0, "end": 132.0, "text": " It reveals that it is not only 5 to 8 times faster than the competition,"}, {"start": 132.0, "end": 135.0, "text": " but also provides higher quality solutions."}, {"start": 135.0, "end": 139.0, "text": " Since these are likely to be deployed in real world applications very soon,"}, {"start": 139.0, "end": 143.0, "text": " it is a good time to start brainstorming about possible applications for this."}, {"start": 143.0, "end": 148.0, "text": " If you have ideas beyond the animation movies and games line, let me know in the comment section."}, {"start": 148.0, "end": 150.0, "text": " I will put the best ones in the video description."}, {"start": 150.0, "end": 171.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XcxzKLrCpyk
AI Photo Translation | Two Minute Papers #243
The paper "Toward Multimodal Image-to-Image Translation" and its source code is available here: https://junyanz.github.io/BicycleGAN/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-2985977/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Recently, a new breed of AI techniques surfaced that are capable of this new thing called image translation. And by image translation, I mean that they can translate a drawn map to a satellite image, take a set of colored labels and make a photorealistic facade, or take a sketch and create a photo out of it. This is done through a generative adversarial network. This is an architecture where we have a pair of neural networks, one that learns to generate new images, and the other learns to tell a fake image from a real one. As they compete against each other, they get better and better without any human interaction. In these earlier applications, unfortunately, the output is typically one image, and since there are many possible shoes that could satisfy our initial sketch, it is highly unlikely that the one we are offered is exactly what we envisioned. This improved version enhances the algorithm to be able to produce not one, but an entire set of outputs. And as you can see here, we have a night image and a set of potential daytime translations on the right that are quite diverse. I really like how it has an intuitive understanding of the illumination differences of the building during night and daytime. It really seems to know how to add lighting to the building. It also models the atmospheric scattering during daytime, creates multiple kinds of pretty convincing clouds, or puts hills in the background. The results are realistic, and the additional selling point is that this technique offers an entire selection of outputs. What I found to be really cool about the next comparisons is that the ground truth images are also attached for reference. If we can take a photograph of a city at night time, we have access to the same view during the daytime too, or we can take a photograph of a shoe and draw the outline of it by hand. As you can see here, there are not only lots of high-quality outputs, but in some cases, the ground truth image is really well approximated by the algorithm. This means that if we give it a crude drawing, it can translate this drawing into a photorealistic image, and I think that is mind-blowing. The validation section of the paper reveals that this technique provides a great trade-off between diversity and quality. There are previous methods that perform well if we need one high-quality solution or many not-so-great ones, but overall this one provides a great package for artists working in the industry, and this will be a godsend for any kind of content creation scenario. The source code of this project is also available, and make sure to read the license before starting your experiments. Thanks for watching and for your generous support and I'll see you next time.
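To make the "one input, many outputs" idea concrete, here is a toy sketch in which a placeholder generator receives the input image together with a random latent code, so that sampling several codes yields several candidate translations. The generator here is untrained and purely illustrative; it only demonstrates the interface, not the paper's model.

```python
# Sketch of multimodal image translation: same input image, different latent codes,
# therefore a diverse set of outputs. The "generator" is an untrained placeholder.
import numpy as np

rng = np.random.default_rng(3)

def toy_generator(night_image, z, weights):
    """Placeholder for a trained image-to-image generator: mixes the input with the latent code."""
    return np.clip(night_image + 0.1 * np.tensordot(z, weights, axes=1), 0.0, 1.0)

H = W_img = 16
z_dim = 8
night_image = rng.random((H, W_img, 3))                        # stand-in for a night photo
G_weights = rng.normal(scale=0.5, size=(z_dim, H, W_img, 3))   # made-up generator "weights"

daytime_variants = [toy_generator(night_image, rng.normal(size=z_dim), G_weights)
                    for _ in range(5)]
print(f"generated {len(daytime_variants)} diverse candidate translations,",
      "each of shape", daytime_variants[0].shape)
```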
[{"start": 0.0, "end": 4.3, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.3, "end": 9.34, "text": " Recently, a new breed of AI technique surfaced that were capable of this new thing called"}, {"start": 9.34, "end": 10.94, "text": " image translation."}, {"start": 10.94, "end": 16.44, "text": " And by image translation, I mean that they can translate a drawn map to a satellite image,"}, {"start": 16.44, "end": 22.66, "text": " take a set of colored labels and make a photorealistic facade, or take a sketch and create a photo"}, {"start": 22.66, "end": 23.740000000000002, "text": " out of it."}, {"start": 23.740000000000002, "end": 26.740000000000002, "text": " This is done through a generative adversarial network."}, {"start": 26.74, "end": 31.18, "text": " This is an architecture where we have a pair of neural networks, one that learns to generate"}, {"start": 31.18, "end": 36.099999999999994, "text": " new images, and the other learns to tell a fake image from a real one."}, {"start": 36.099999999999994, "end": 41.0, "text": " As they compete against each other, they get better and better without any human interaction."}, {"start": 41.0, "end": 46.3, "text": " In these earlier applications, unfortunately, the output is typically one image and since"}, {"start": 46.3, "end": 51.34, "text": " there are many possible shoes that could satisfy our initial sketch, it is highly unlikely"}, {"start": 51.34, "end": 54.9, "text": " that the one we are offered is exactly what we envisioned."}, {"start": 54.9, "end": 59.9, "text": " This improved version and has this algorithm to be able to produce not one, but an entire"}, {"start": 59.9, "end": 61.66, "text": " set of outputs."}, {"start": 61.66, "end": 66.3, "text": " And as you can see here, we have a night image and a set of potential daytime translations"}, {"start": 66.3, "end": 68.66, "text": " on the right that are quite diverse."}, {"start": 68.66, "end": 73.5, "text": " I really like how it has an intuitive understanding of the illumination differences of the building"}, {"start": 73.5, "end": 75.5, "text": " during night and daytime."}, {"start": 75.5, "end": 78.78, "text": " It really seems to know how to add lighting to the building."}, {"start": 78.78, "end": 83.94, "text": " It also models the atmospheric scattering during daytime, creates multiple kinds of pretty"}, {"start": 83.94, "end": 87.78, "text": " convincing clouds or puts heels in the background."}, {"start": 87.78, "end": 92.62, "text": " The results are both realistic and the additional selling point is that this technique offers an"}, {"start": 92.62, "end": 94.9, "text": " entire selection of outputs."}, {"start": 94.9, "end": 99.17999999999999, "text": " What I found to be really cool about the next comparisons is that the ground truth images"}, {"start": 99.17999999999999, "end": 101.3, "text": " are also attached for reference."}, {"start": 101.3, "end": 106.14, "text": " If we can take a photograph of a city at night time, we have access to the same view during"}, {"start": 106.14, "end": 112.18, "text": " the daytime too, or we can take a photograph of a shoe and draw the outline of it by hand."}, {"start": 112.18, "end": 116.74000000000001, "text": " As you can see here, there are not only lots of high quality outputs, but in some cases,"}, {"start": 116.74000000000001, "end": 120.78, "text": " the ground truth image is really well approximated by the algorithm."}, {"start": 120.78, "end": 
125.74000000000001, "text": " This means that we give it a crew drawing and it could translate this drawing into a photorealistic"}, {"start": 125.74000000000001, "end": 128.62, "text": " image, I think that is mind blowing."}, {"start": 128.62, "end": 133.22, "text": " The validation section of the paper reveals that this technique provides a great trade-off"}, {"start": 133.22, "end": 135.54000000000002, "text": " between diversity and quality."}, {"start": 135.54000000000002, "end": 141.10000000000002, "text": " There are previous methods that perform well if we need one high quality solution or many"}, {"start": 141.1, "end": 146.26, "text": " not-so-great ones, but overall this one provides a great package for artists working in the"}, {"start": 146.26, "end": 151.18, "text": " industry and this will be a godsend for any kind of content creation scenario."}, {"start": 151.18, "end": 155.66, "text": " The source code of this project is also available and make sure to read the license before starting"}, {"start": 155.66, "end": 156.66, "text": " your experiments."}, {"start": 156.66, "end": 186.22, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=GdTBqBnqhaQ
4 Experiments Where the AI Outsmarted Its Creators! 🤖
The paper "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities" is available here: https://arxiv.org/abs/1803.03453 ❤️ Support the show on Patreon: https://www.patreon.com/TwoMinutePapers Other video resources: Evolving AI Lab - https://www.youtube.com/watch?v=_5Y1hSLhYdY&feature=youtu.be Cooperative footage - https://infoscience.epfl.ch/record/99661/files/florenoetal_preprint.pdf We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-3010727/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, I am really excited to show you four experiments where AI researchers were baffled by the creativity and unexpected actions of their own creations. You better hold on to your papers. In the first experiment, robots were asked to walk around while minimizing the amount of foot contact with the ground. Much to the scientists' surprise, the robots answered that this can be done with 0% contact, meaning that they never, ever touched the ground with their feet. The scientists wondered how that is even possible and pulled up the video of the proof. This proof showed a robot flipping over and walking using its elbows. Talk about thinking outside the box. Wow! A different robot arm experiment also came to a surprising conclusion. At first, the robot arm had to use its grippers to grab a cube, which it successfully learned to perform. However, in a later experiment, the gripper was crippled, making the robot unable to open its fingers. Scientists expected a pathetic video with the robot trying to push the box around and always failing to pick up the cube. Instead, they found this. You see it, right? Instead of using the fingers, the robot finds the perfect angle to smash the hand against the box to force the gripper to open and pick up the box. That is some serious dedication to solving the task at hand. Bravo! In the next experiment, a group of robots were tasked to find food and avoid poisonous objects in an environment and were equipped with a light and no further instructions. First, they learned to use the lights to communicate the presence of food and poison to each other and cooperate. This demonstrates that when trying to maximize the probability of the survival of an entire colony, the concept of communication and cooperation can emerge even from simple neural networks. Absolutely beautiful! And what is even more incredible is that later, when a new reward system was created that fosters self-preservation, the robots learned to deceive each other by lighting up the food signal near the poison to take out their competitors and increase their chances. And these behaviors emerge from a reward system and a few simple neural networks. Mind-blowing. A different AI was asked to fix a faulty sorting computer program. Soon, it achieved a perfect score without changing anything, because it noticed that by short-circuiting the program itself, it always provides an empty output. And of course, you know, if there are no numbers, there is nothing to sort. Problem solved. Make sure to have a look at the paper; there are many more experiments that went similarly, including a case where the AI found a bug in a physics simulation program to get an edge. AI research is improving at such a rapid pace. It is clearly capable of things that surpass our wildest imagination, but we have to make sure to formulate our problems with proper caution, because the AI will try to use loopholes instead of common sense to solve them. When in a car chase, don't ask the car AI to unload all unnecessary weights to go faster, or if you do, prepare to be promptly ejected from the car. If you have enjoyed this episode, please make sure to have a look at our Patreon page in the video description, where you can pick up really cool perks like early access to these videos or getting your name shown in the video description, and more. Thanks for watching and for your generous support and I'll see you next time.
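The sorting anecdote above is a compact example of reward misspecification, and it can be illustrated in a few lines of code: if the fitness function only checks that adjacent elements are in order, an empty output trivially earns a perfect score. The fitness function below is an invented toy, not the one used in the cited experiment.

```python
# Toy illustration of the "empty output counts as sorted" loophole mentioned above.
# The fitness function is deliberately naive: it only checks that adjacent elements
# are in order, so returning nothing at all earns a perfect score.
def naive_sortedness_fitness(output):
    """Fraction of adjacent pairs that are in non-decreasing order (1.0 if no pairs exist)."""
    pairs = list(zip(output, output[1:]))
    if not pairs:
        return 1.0          # vacuously "sorted" -- this is the loophole
    return sum(a <= b for a, b in pairs) / len(pairs)

print("partially sorted output:", naive_sortedness_fitness([1, 3, 2]))   # 0.5
print("empty output (loophole):", naive_sortedness_fitness([]))          # 1.0
```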
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Yjona Ifehir."}, {"start": 4.08, "end": 9.56, "text": " Today, I am really excited to show you four experiments where AI researchers were baffled"}, {"start": 9.56, "end": 13.88, "text": " by the creativity and unexpected actions of their own creations."}, {"start": 13.88, "end": 15.8, "text": " You better hold on to your papers."}, {"start": 15.8, "end": 20.64, "text": " In the first experiment, robots were asked to walk around while minimizing the amount of"}, {"start": 20.64, "end": 22.68, "text": " food contact with the ground."}, {"start": 22.68, "end": 28.52, "text": " Much to the scientist's surprise, the robots answered that this can be done with 0% contact,"}, {"start": 28.52, "end": 31.96, "text": " meaning that they never, ever touched the ground with the feet."}, {"start": 31.96, "end": 37.2, "text": " The scientist wondered how that is even possible and pulled up the video of the proof."}, {"start": 37.2, "end": 41.879999999999995, "text": " This proof showed a robot flipping over and walking using its elbows."}, {"start": 41.879999999999995, "end": 44.28, "text": " Talk about thinking outside the box."}, {"start": 44.28, "end": 45.8, "text": " Wow!"}, {"start": 45.8, "end": 49.879999999999995, "text": " A different robot arm experiment also came to a surprising conclusion."}, {"start": 49.879999999999995, "end": 55.04, "text": " At first, the robot arm had to use its grippers to grab a cube which it successfully learned"}, {"start": 55.04, "end": 56.04, "text": " to perform."}, {"start": 56.04, "end": 61.64, "text": " However, in a later experiment, the gripper was crippled, making the robot unable to open"}, {"start": 61.64, "end": 63.28, "text": " its fingers."}, {"start": 63.28, "end": 67.96, "text": " Scientists expected a pathetic video with the robot trying to push the box around and"}, {"start": 67.96, "end": 70.36, "text": " always failing to pick up the cube."}, {"start": 70.36, "end": 73.92, "text": " Instead, they have found this."}, {"start": 73.92, "end": 75.24, "text": " You see it right?"}, {"start": 75.24, "end": 79.92, "text": " Instead of using the fingers, the robot finds the perfect angle to smash the hand against"}, {"start": 79.92, "end": 83.88, "text": " the box to force the gripper to open and pick up the box."}, {"start": 83.88, "end": 87.64, "text": " That is some serious dedication to solving the task at hand."}, {"start": 87.64, "end": 88.64, "text": " Bravo!"}, {"start": 88.64, "end": 93.64, "text": " In the next experiment, a group of robots were tasked to find food and avoid poisonous"}, {"start": 93.64, "end": 98.88, "text": " objects in an environment and were equipped with the light and no further instructions."}, {"start": 98.88, "end": 103.6, "text": " First, they learned to use the lights to communicate the presence of food and poison to"}, {"start": 103.6, "end": 105.72, "text": " each other and cooperate."}, {"start": 105.72, "end": 110.47999999999999, "text": " This demonstrates that when trying to maximize the probability of the survival of an entire"}, {"start": 110.47999999999999, "end": 113.8, "text": " colony, the concept of communication and cooperation"}, {"start": 113.8, "end": 117.0, "text": " can emerge even from simple neural networks."}, {"start": 117.0, "end": 118.6, "text": " Absolutely beautiful!"}, {"start": 118.6, "end": 123.32, "text": " And what is even more incredible is that later, when a new reward system was created 
that"}, {"start": 123.32, "end": 128.96, "text": " fosters self-preservation, the robots learned to deceive each other by lighting up the"}, {"start": 128.96, "end": 134.35999999999999, "text": " food signal near the poison to take out their competitors and increase their chances."}, {"start": 134.35999999999999, "end": 141.32, "text": " And these behaviors emerge from a reward system and a few simple neural networks, mind-blowing."}, {"start": 141.32, "end": 145.51999999999998, "text": " A different AI was asked to fix a faulty sorting computer program."}, {"start": 145.51999999999998, "end": 150.79999999999998, "text": " Soon, it achieved a perfect score without changing anything because it noticed that by"}, {"start": 150.79999999999998, "end": 155.51999999999998, "text": " short circuiting the program itself, it always provides an empty output."}, {"start": 155.51999999999998, "end": 159.35999999999999, "text": " And of course, you know, if there are no numbers, there is nothing to sort."}, {"start": 159.35999999999999, "end": 160.35999999999999, "text": " Problem solved."}, {"start": 160.35999999999999, "end": 164.51999999999998, "text": " Make sure to have a look at the paper, there are many more experiments that went similarly,"}, {"start": 164.51999999999998, "end": 169.28, "text": " including a case where the AI found a bug in a physics simulation program to get an"}, {"start": 169.28, "end": 170.28, "text": " edge."}, {"start": 170.28, "end": 172.96, "text": " And that research improving gets such a rapid pace."}, {"start": 172.96, "end": 177.84, "text": " It is clearly capable of things that surpasses our wildest imagination, but we have to make"}, {"start": 177.84, "end": 183.24, "text": " sure to formulate our problems with proper caution because the AI will try to use loopholes"}, {"start": 183.24, "end": 185.52, "text": " instead of common sense to solve them."}, {"start": 185.52, "end": 191.4, "text": " When in a car chase, don't ask the car AI to unload all unnecessary weights to go faster,"}, {"start": 191.4, "end": 194.68, "text": " or if you do, prepare to be promptly ejected from the car."}, {"start": 194.68, "end": 198.2, "text": " If you have enjoyed this episode, please make sure to have a look at our Patreon page"}, {"start": 198.2, "end": 202.64, "text": " in the video description where you can pick up really cool perks like early access to"}, {"start": 202.64, "end": 207.2, "text": " these videos or getting your name shown in the video description and more."}, {"start": 207.2, "end": 236.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6FzVhIV_t3s
Gaussian Material Synthesis (SIGGRAPH 2018)
In this work, we teach an AI the concept of metallic, translucent materials and more. The paper "Gaussian Material Synthesis" and its source code is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Acknowledgments: We would like to thank Robin Marin for the material test scene and Vlad Miller for his help with geometry modeling, Felícia Zsolnai-Fehér for improving the design of many figures, Hiroyuki Sakai, Christian Freude, Johannes Unterguggenberger, Pranav Shyam and Minh Dang for their useful comments, and Silvana Podaras for her help with a previous version of this work. We also thank NVIDIA for providing the GPU used to train our neural networks. This work was partially funded by Austrian Science Fund (FWF), project number P27974. Scene and geometry credits: Gold Bars – JohnsonMartin, Christmas Ornaments – oenvoyage, Banana – sgamusse, Bowl – metalix, Grapes – PickleJones, Glass Fruits – BobReed64, Ice cream – b2przemo, Vases – Technausea, Break Time – Jay-Artist, Wrecking Ball – floydkids, Italian Still Life – aXel, Microplanet – marekv, Microplanet vegetation – macio. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #neuralrendering
Creating high-quality photorealistic materials for light transport simulations typically includes direct hands-on interaction with a principled shader. This means that the user has to tweak a large number of material properties by hand and has to wait for a new image of it to be rendered after each interaction. This requires a fair bit of expertise, and the best setups are often obtained through a lengthy trial and error process. To enhance this workflow, we present a learning-based system for rapid mass-scale material synthesis. First, the user is presented with a gallery of materials, and the assigned scores are shown in the upper left. Here, we learn the concept of glassy and transparent materials. By learning on only a few tens of high-scoring samples, our system is able to recommend many new materials from the learned distributions. The learning step typically takes a few seconds, whereas the recommendations take negligible time and can be done on a mass scale. Then, these recommendations can be used to populate a scene with materials. Typically, each recommendation takes 40 to 60 seconds to render with global illumination, which is clearly unacceptable for real-world workflows, even for mid-size galleries. In the next step, we propose a convolutional neural network that is able to predict images of these materials that are close to the ones generated via global illumination and takes less than 3 milliseconds per image. Sometimes, a recommended material is close to the one envisioned by the user but requires a bit of fine-tuning. To this end, we embed our high-dimensional shader descriptors into an intuitive 2D latent space where exploration and adjustments can take place without any domain expertise. However, this isn't very useful without additional information, because the user does not know which regions offer useful material models that are in line with their scores. One of our key observations is that this latent space technique can be combined with Gaussian process regression to provide an intuitive color coding of the expected preferences to help highlight the regions that may be of interest. Furthermore, our convolutional neural network can also provide real-time predictions of these images. These predictions are close to indistinguishable from the real rendered images and are generated in real time. Beyond the preference map, this neural network also opens up the possibility of visualizing the expected similarity of these new materials to the one we seek to fine-tune. By combining the preference and similarity maps, we obtain a color coding that guides the user in this latent space towards materials that are both similar and have a high expected score. To accentuate the utility of our real-time variant generation technique, we show a practical case where one of the grape materials is almost done but requires a slight reduction in vividness. This adjustment doesn't require any domain expertise or direct interaction with the material modeling system and can be done in real time. In this example, we learn the concept of translucent materials from only a handful of high-scoring samples and generate a large number of recommendations from the learned distribution. These recommendations can then be used to populate the scene with relevant materials. Here, we show the preference and similarity maps of the learned translucent material space and explore possible variants of an input material.
These recommendations can be used for mass-scale material synthesis, and the amount of variation can be tweaked to suit the user's artistic vision. After assigning the appropriate materials, displacements and other advanced effects can be easily added to these materials. We have also experimented with an extended, more expressive version of our shader that also includes procedural textured albedos and displacements. The following scenes were populated using the material learning and recommendation and latent space embedding steps. We have proposed a system for mass-scale material synthesis that is able to rapidly recommend a broad range of new material models after learning the user preferences from a modest number of samples. Beyond this pipeline, we also explored powerful combinations of these three learning algorithms, thereby opening up the possibility of real-time photorealistic material visualization, exploration and fine-tuning in a 2D latent space. We believe this feature set offers a useful solution for rapid mass-scale material synthesis for novice and expert users alike, and we hope to see more exploratory works combining the advantages of multiple state-of-the-art learning algorithms in the future.
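As a rough sketch of the preference-learning step described above, the snippet below fits a Gaussian process regressor to a few user-scored material parameter vectors and then ranks a large set of unseen candidates by their predicted score. The dimensionality, kernel choice and random scores are placeholders; this is not the exact pipeline or data representation from the paper.

```python
# Sketch of preference learning with Gaussian process regression, assuming scikit-learn.
# All numbers (parameter dimensionality, scores, candidate count) are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

n_scored, n_candidates, n_params = 30, 1000, 10   # a few tens of scored samples, many candidates
scored_materials = rng.random((n_scored, n_params))   # shader parameter vectors shown in the gallery
user_scores = rng.random(n_scored) * 10               # placeholder for the user's 0-10 ratings

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gpr.fit(scored_materials, user_scores)                # "learning the user's taste" takes seconds

candidates = rng.random((n_candidates, n_params))     # new material variants to consider
mean, std = gpr.predict(candidates, return_std=True)  # expected preference + uncertainty

top = np.argsort(mean)[::-1][:5]
print("recommended candidate indices:", top)
print("their expected scores:        ", np.round(mean[top], 2))
```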
[{"start": 0.0, "end": 9.0, "text": " Creating high-quality photorealistic materials for light transport simulations typically includes direct hands-on interaction with a principal shader."}, {"start": 9.0, "end": 18.0, "text": " This means that the user has to tweak a large number of material properties by hand and has to wait for a new image of it to be rendered after each interaction."}, {"start": 18.0, "end": 25.0, "text": " This requires a fair bit of expertise and the best setups are often obtained through a lengthy trial and error process."}, {"start": 25.0, "end": 31.0, "text": " To enhance this workflow, we present a learning-based system for rapid mass-scale material synthesis."}, {"start": 31.0, "end": 37.0, "text": " First, the user is presented with a gallery of materials and the assigned scores are shown in the upper left."}, {"start": 37.0, "end": 41.0, "text": " Here, we learn the concept of glassy and transparent materials."}, {"start": 41.0, "end": 49.0, "text": " By learning on only a few tens of high-scoring samples, our system is able to recommend many new materials from the learn distributions."}, {"start": 49.0, "end": 56.0, "text": " The learning step typically takes a few seconds where the recommendations take negligible time and can be done on a mass scale."}, {"start": 56.0, "end": 61.0, "text": " Then, these recommendations can be used to populate a scene with materials."}, {"start": 61.0, "end": 71.0, "text": " Typically, each recommendation takes 40 to 60 seconds to render with global illumination, which is clearly unacceptable for real-world workflows, even for mid-size galleries."}, {"start": 71.0, "end": 83.0, "text": " In the next step, we propose a convolutional neural network that is able to predict images of these materials that are close to the ones generated via global illumination and takes less than 3 milliseconds per image."}, {"start": 83.0, "end": 90.0, "text": " Sometimes, a recommended material is close to the one envisioned by the user that requires a bit of fine tuning."}, {"start": 90.0, "end": 100.0, "text": " To this end, we embed our high-dimensional shader descriptors into an intuitive 2D latent space where exploration and adjustments can take place without any domain expertise."}, {"start": 100.0, "end": 110.0, "text": " However, this isn't very useful without additional information because the user does not know which regions offer useful material models that are in line with their scores."}, {"start": 110.0, "end": 123.0, "text": " One of our key observations is that this latent space technique can be combined with Gaussian process regression to provide an intuitive color coding of the expected preferences to help highlighting the regions that may be of interest."}, {"start": 123.0, "end": 129.0, "text": " Furthermore, our convolutional neural network can also provide real-time predictions of these images."}, {"start": 129.0, "end": 136.0, "text": " These predictions are close to indistinguishable from the real-render images and are generated in real-time."}, {"start": 143.0, "end": 153.0, "text": " Beyond the preference map, this neural network also opens up the possibility of visualizing the expected similarity of these new materials to the one we seek to fine-tune."}, {"start": 153.0, "end": 165.0, "text": " By combining the preference and similarity maps, we obtain a color coding that guides the user in this latent space towards materials that are both similar and have a high expected score."}, {"start": 169.0, "end": 180.0, 
"text": " To accentuate the utility of our real-time variant generation technique, we show a practical case where one of the great materials is almost done but requires a slight reduction in vividity."}, {"start": 180.0, "end": 187.0, "text": " This adjustment doesn't require any domain expertise or direct interaction with the material modeling system and can be done in real-time."}, {"start": 187.0, "end": 204.0, "text": " In this example, we learn the concept of translucent materials from only a handful of high-scoring samples and generate a large amount of recommendations from the Learn Distribution."}, {"start": 218.0, "end": 225.0, "text": " These recommendations can then be used to populate the scene with relevant materials."}, {"start": 225.0, "end": 246.0, "text": " Here, we show the preference and similarity maps of the Learn Translucent Material Space and explore possible variants of an input material."}, {"start": 246.0, "end": 257.0, "text": " These recommendations can be used for mass-scale material synthesis and the amount of variation can be tweaked to suit the user's artistic vision."}, {"start": 260.0, "end": 267.0, "text": " After assigning the appropriate materials, displacements and other advanced effects can be easily added to these materials."}, {"start": 267.0, "end": 278.0, "text": " We have also experimented with an extended, more expressive version of our shader that also includes procedural texture del Beatles and displacements."}, {"start": 279.0, "end": 286.0, "text": " The following scenes were populated using the Material Learning and Recommendation and latent space embedding steps."}, {"start": 286.0, "end": 298.0, "text": " We have proposed a system for mass-scale material synthesis that is able to rapidly recommend a broad range of new material models after learning the user preferences from a modest number of samples."}, {"start": 299.0, "end": 313.0, "text": " Beyond this pipeline, we also explored powerful combinations of the three use-learning algorithms, thereby opening up the possibility of real-time photorealistic material visualization, exploration and fine-tuning in a 2D latent space."}, {"start": 313.0, "end": 328.0, "text": " We believe this feature set offers a useful solution for rapid mass-scale material synthesis for novice and expert users alike and hope to see more exploratory works combining the advantages of multiple state-of-the-art learning algorithms in the future."}]
Two Minute Papers
https://www.youtube.com/watch?v=ni6P5KU3SDU
Evolving Generative Adversarial Networks | Two Minute Papers #242
The paper "Evolutionary Generative Adversarial Networks" is available here: https://arxiv.org/abs/1803.00657 Our Patreon page: https://www.patreon.com/TwoMinutePapers Recommended for you: Video game to reality conversion: https://www.youtube.com/watch?v=dqxqbvyOnMY We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3100786/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With the recent ascendancy of neural network-based techniques, we have witnessed amazing algorithms that are able to take an image from a video game and translate it into reality, and the other way around. Or they can also translate daytime images to their nighttime versions or change summer to winter and back. Some AI-based algorithms can also create near-photorealistic images from our sketches. So the first question is, how is this wizardry even possible? These techniques are implemented by using generative adversarial networks, GANs in short. This is an architecture where two neural networks battle each other. The generator network is the artist who tries to create convincing, real-looking images. The discriminator network is the critic that tries to tell a fake image from a real one. The artist learns from the feedback of the critic and will improve itself to come up with better quality images, and in the meantime, the critic also develops a sharper eye for fake images. These two adversaries push each other until they both become adept at their tasks. However, the training of these GANs is fraught with difficulties. For instance, it is not guaranteed that this process converges to a point, and therefore it matters a great deal when we stop training the networks. This makes reproducing some works very challenging and is generally not a desirable property of GANs. It is also possible that the generator starts focusing on a select set of inputs and refuses to generate anything else, a phenomenon we refer to as mode collapse. So how could we possibly defeat these issues? This work presents a technique that mimics the steps of evolution in nature: evaluation, selection and variation. First, this means that not one, but many generator networks are trained, and only the ones that provide sufficient quality and diversity in their images will be preserved. We start with an initial population of generator networks and evaluate the fitness of each of them. The better and more diverse the images they produce, the more fit they are, and the more likely they are to survive the selection step, where we eliminate the most unfit candidates. Okay, so now we see how a subset of these networks become the victim of evolution. This is how networks get eaten, if you will. But how do we produce new ones? And this is how we arrive at the variation step, where new generator networks are created by introducing variations to the networks that are still alive in this environment. This simulates the creation of an offspring and will provide the next set of candidates for the next selection step, and we hope that if we play this game over a long time, we get more and more resilient offspring. The resulting algorithm can be trained in a more stable way, and it can create new bedroom images when being shown a database of bedrooms. When compared to the state of the art, we see that this evolutionary approach offers high quality images and more diversity in the outputs. It can also generate new human faces that are quite decent. They are clearly not perfect, but a technique that can pull this off consistently will be an excellent baseline for newer and better research works in the near future. We are also getting very close to an era where we can generate thousands of convincing digital characters from scratch, to name just one application. What a time to be alive. Thanks for watching and for your generous support and I'll see you next time.
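Here is a toy outline of the evaluate / select / vary loop described above, applied to a population of "generators" represented as plain parameter vectors. In the actual method the fitness combines a quality and a diversity term computed from the discriminator's feedback on generated images; the fitness function below is a made-up placeholder so that only the evolutionary skeleton is shown.

```python
# Toy sketch of the evaluation / selection / variation loop over a population of generators.
# The fitness function is an invented stand-in, not the E-GAN objective.
import numpy as np

rng = np.random.default_rng(5)

def fitness(generator_params):
    """Placeholder for 'quality + diversity of the generated images'."""
    quality = -np.sum((generator_params - 0.5) ** 2)     # pretend the best generator sits at 0.5
    diversity = 0.1 * np.std(generator_params)
    return quality + diversity

population_size, n_params, n_survivors = 8, 20, 4
population = [rng.random(n_params) for _ in range(population_size)]

for generation in range(10):
    # Evaluation: score every generator in the current population.
    scored = sorted(population, key=fitness, reverse=True)
    # Selection: keep only the fittest generators, the rest are discarded.
    survivors = scored[:n_survivors]
    # Variation: each survivor produces a mutated offspring to refill the population.
    offspring = [s + rng.normal(scale=0.05, size=n_params) for s in survivors]
    population = survivors + offspring
    print(f"generation {generation}: best fitness {fitness(survivors[0]):.4f}")
```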
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifaher."}, {"start": 4.28, "end": 9.52, "text": " With the recent ascendancy of Neural Network-based techniques, we have witnessed amazing algorithms"}, {"start": 9.52, "end": 15.200000000000001, "text": " that are able to take an image from a video game and translate it into reality and the"}, {"start": 15.200000000000001, "end": 16.6, "text": " other way around."}, {"start": 16.6, "end": 22.16, "text": " Or they can also translate daytime images to their nighttime versions or change summer"}, {"start": 22.16, "end": 24.8, "text": " to winter and back."}, {"start": 24.8, "end": 30.240000000000002, "text": " Some AIB's algorithms can also create near-photorealistic images from our sketches."}, {"start": 30.240000000000002, "end": 34.480000000000004, "text": " So the first question is, how is this wizardry even possible?"}, {"start": 34.480000000000004, "end": 40.120000000000005, "text": " These techniques are implemented by using generative adversarial networks, GANs, in short."}, {"start": 40.120000000000005, "end": 43.760000000000005, "text": " This is an architecture where two neural networks battle each other."}, {"start": 43.760000000000005, "end": 48.6, "text": " The generator network is the artist who tries to create convincing, re-looking images."}, {"start": 48.6, "end": 53.68, "text": " The discriminator network is the critic that tries to tell a fake image from a real one."}, {"start": 53.68, "end": 58.12, "text": " The artist learns from the feedback of the critic and will improve itself to come up with"}, {"start": 58.12, "end": 63.480000000000004, "text": " better quality images, and in the meantime, the critic also develops a sharper eye for"}, {"start": 63.480000000000004, "end": 64.92, "text": " fake images."}, {"start": 64.92, "end": 70.0, "text": " These two adversaries push each other until they both become adept at their tasks."}, {"start": 70.0, "end": 74.0, "text": " However, the training of these GANs is fraught with difficulties."}, {"start": 74.0, "end": 78.84, "text": " For instance, it is not guaranteed that this process converges to a point and therefore"}, {"start": 78.84, "end": 82.84, "text": " it matters a great deal when we stop training the networks."}, {"start": 82.84, "end": 88.16, "text": " This makes reproducing some works very challenging and is generally not a desirable property"}, {"start": 88.16, "end": 89.36, "text": " of GANs."}, {"start": 89.36, "end": 94.76, "text": " It is also possible that the generator starts focusing on a select set of inputs and refuses"}, {"start": 94.76, "end": 99.84, "text": " to generate anything else a phenomenon will refer to as mode collapse."}, {"start": 99.84, "end": 102.64, "text": " So how could we possibly defeat these issues?"}, {"start": 102.64, "end": 108.2, "text": " This work presents a technique that mimics the steps of evolution in nature, evaluation,"}, {"start": 108.2, "end": 110.52000000000001, "text": " selection and variation."}, {"start": 110.52, "end": 115.6, "text": " First this means that not one, but many generator networks are trained and only the ones that"}, {"start": 115.6, "end": 120.24, "text": " provide sufficient quality and diversity in their images will be preserved."}, {"start": 120.24, "end": 125.28, "text": " We start with an initial population of generator networks and evaluate the fitness of each"}, {"start": 125.28, "end": 126.28, "text": " of them."}, {"start": 126.28, "end": 
130.76, "text": " The better and more diverse images they produce, the more fit they are, the more fit they"}, {"start": 130.76, "end": 135.2, "text": " are, the more likely they are to survive the selection step where we eliminate the most"}, {"start": 135.2, "end": 136.72, "text": " unfit candidates."}, {"start": 136.72, "end": 142.28, "text": " Okay, so now we see how a subset of these networks become the victim of evolution."}, {"start": 142.28, "end": 144.92, "text": " This is how networks get eaten, if you will."}, {"start": 144.92, "end": 146.8, "text": " But how do we produce new ones?"}, {"start": 146.8, "end": 151.52, "text": " And this is how we arrive to the variation step where new generator networks are created"}, {"start": 151.52, "end": 156.36, "text": " by introducing variations to the networks that are still alive in this environment."}, {"start": 156.36, "end": 160.56, "text": " This simulates the creation of an offspring and will provide the next set of candidates"}, {"start": 160.56, "end": 165.64, "text": " for the next selection step and we hope that if we play this game over a long time, we"}, {"start": 165.64, "end": 167.92, "text": " get more and more resilient offspring."}, {"start": 167.92, "end": 172.72, "text": " The resulting algorithm can be trained in a more stable way and it can create new bedroom"}, {"start": 172.72, "end": 175.88, "text": " images when being shown a database of bedrooms."}, {"start": 175.88, "end": 180.39999999999998, "text": " When compared to the state of the art, we see that this evolutionary approach offers high"}, {"start": 180.39999999999998, "end": 183.95999999999998, "text": " quality images and more diversity in the outputs."}, {"start": 183.95999999999998, "end": 187.76, "text": " It can also generate new human faces that are quite decent."}, {"start": 187.76, "end": 192.27999999999997, "text": " They are clearly not perfect, but a technique that can pull this off consistently will be"}, {"start": 192.28, "end": 196.84, "text": " an excellent baseline for newer and better research works in the near future."}, {"start": 196.84, "end": 201.88, "text": " We are also getting very close to an era where we can generate thousands of convincing digital"}, {"start": 201.88, "end": 205.68, "text": " characters from scratch to name just one application."}, {"start": 205.68, "end": 207.04, "text": " What a time to be alive."}, {"start": 207.04, "end": 237.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AbxPbfODGcs
This Fools Your Vision | Two Minute Papers #241
The paper "Adversarial Examples that Fool both Human and Computer Vision" is available here: https://arxiv.org/abs/1802.08195 Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-2479948/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural networks are amazing at recognizing objects when being shown an image, and in some cases, like traffic sign recognition, their performance can reach superhuman levels. But as we discussed in the previous episode, most of these networks have an interesting property where we can add small changes to an input photo and have the network misclassify it as something completely different. A super effective neural network can be reduced to something that is less accurate than a coin flip with a properly crafted adversarial attack. So of course, we may think that neural networks are much smaller and simpler than the human brain, and because of that, of course, we cannot perform such an adversarial attack on the human vision system. Right? Or is it possible that some of the properties of machine vision systems can be altered to fool human vision? And now, hold on to your papers. I think you know what's coming. This algorithm performs an adversarial attack on you. This image depicts a cat. And this image depicts a dog? Surely it's a dog, right? Well, no. This is an image of the previous cat plus some carefully crafted noise that makes it look like a dog. This is such a peculiar effect. I am staring at it, and I know for a fact that this is not a dog. This is cat plus noise, but I cannot not see it as a dog. Wow, this is certainly something that you don't see every day. So let's look at what changes were made to the image. Clearly, the nose appears to be longer and thicker, so that's a dog-like feature. But it is of utmost importance that we don't overlook the fact that several cat-specific features still remain in the image; for instance, the whiskers are very cat-like. And despite that, we still see it as a dog. This is insanity. This technique works by performing an adversarial attack against an AI model and modifying the noise generator model to better match the human visual system. Of course, the noise we have to add depends on the architecture of the neural network, and by this, I mean the number of layers and the number of neurons within these layers and many other parameters. However, a key insight of the paper is that there are still features that are shared between most architectures. This means that if we create an attack that works against five different neural network architectures, it is highly likely that it will also work on an arbitrary sixth network that we haven't seen yet. And it turns out that some of these noise distributions are also useful against the human visual system. Make sure to have a look at the paper. I have found it to be an easy read, and quite frankly, I am stunned by the result. It is clear that machine learning research is progressing at a staggering pace, but I haven't expected this. I haven't expected this at all. If you are enjoying the series, please make sure to have a look at our Patreon page to pick up cool perks like watching these episodes in early access or getting your name displayed in the video description as a key supporter. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Kato Ysola-Ifahir."}, {"start": 4.6000000000000005, "end": 9.76, "text": " Neural networks are amazing at recognizing objects when being shown an image, and in some"}, {"start": 9.76, "end": 15.08, "text": " cases, like traffic sign recognition, their performance can reach superhuman levels."}, {"start": 15.08, "end": 18.88, "text": " But as we discussed in the previous episode, most of these networks have an interesting"}, {"start": 18.88, "end": 23.96, "text": " property where we can add small changes to an input photo and have the network misclassify"}, {"start": 23.96, "end": 26.240000000000002, "text": " it to something completely different."}, {"start": 26.24, "end": 30.599999999999998, "text": " The super effective neural network can be reduced to something that is less accurate than"}, {"start": 30.599999999999998, "end": 34.16, "text": " a coin flip with a properly crafted adversarial attack."}, {"start": 34.16, "end": 39.0, "text": " So of course, we may think that neural networks are much smaller and simpler than the human"}, {"start": 39.0, "end": 44.04, "text": " brain, and because of that, of course, we cannot perform such an adversarial attack on"}, {"start": 44.04, "end": 45.8, "text": " the human vision system."}, {"start": 45.8, "end": 46.8, "text": " Right?"}, {"start": 46.8, "end": 52.16, "text": " Or is it possible that some of the properties of machine vision systems can be altered"}, {"start": 52.16, "end": 54.239999999999995, "text": " to fool the human vision?"}, {"start": 54.24, "end": 56.32, "text": " And now, hold on to your papers."}, {"start": 56.32, "end": 58.2, "text": " I think you know what's coming."}, {"start": 58.2, "end": 62.44, "text": " This algorithm performs an adversarial attack on you."}, {"start": 62.44, "end": 64.68, "text": " This image depicts a cat."}, {"start": 64.68, "end": 67.32000000000001, "text": " And this image depicts a dog?"}, {"start": 67.32000000000001, "end": 69.56, "text": " Surely it's a dog, right?"}, {"start": 69.56, "end": 71.12, "text": " Well, no."}, {"start": 71.12, "end": 76.16, "text": " This is an image of the previous cat plus some carefully crafted noise that makes it look"}, {"start": 76.16, "end": 77.44, "text": " like a dog."}, {"start": 77.44, "end": 79.64, "text": " This is such a peculiar effect."}, {"start": 79.64, "end": 83.72, "text": " I am staring at it, and I know for a fact that this is not a dog."}, {"start": 83.72, "end": 88.0, "text": " This is cat plus noise, but I cannot not see it as a dog."}, {"start": 88.0, "end": 91.84, "text": " Wow, this is certainly something that you don't see every day."}, {"start": 91.84, "end": 94.24, "text": " So let's look at what changes were made to the image."}, {"start": 94.24, "end": 99.68, "text": " Clearly, the nose appears to be longer and thicker, so that's a dog-like feature."}, {"start": 99.68, "end": 104.32, "text": " But it is of utmost importance that we don't overlook the fact that several cat-specific"}, {"start": 104.32, "end": 109.48, "text": " features still remain in the image, for instance, the whiskers are very cat-like."}, {"start": 109.48, "end": 112.72, "text": " And despite that, we still see it as a dog."}, {"start": 112.72, "end": 114.4, "text": " This is insanity."}, {"start": 114.4, "end": 119.92, "text": " This technique works by performing an adversarial attack against an AI model and modifying the"}, {"start": 119.92, "end": 123.96, 
"text": " noise generator model to better match the human visual system."}, {"start": 123.96, "end": 128.32, "text": " Of course, the noise we have to add depends on the architecture of the neural network,"}, {"start": 128.32, "end": 132.8, "text": " and by this, I mean the number of layers and the number of neurons within these layers"}, {"start": 132.8, "end": 134.32, "text": " and many other parameters."}, {"start": 134.32, "end": 138.96, "text": " However, a key insight of the paper is that there are still features that are shared"}, {"start": 138.96, "end": 140.76, "text": " between most architectures."}, {"start": 140.76, "end": 144.88, "text": " This means that if we create an attack that works against five different neural network"}, {"start": 144.88, "end": 150.32, "text": " architectures, it is highly likely that it will also work on an arbitrary sixth network"}, {"start": 150.32, "end": 152.2, "text": " that we haven't seen yet."}, {"start": 152.2, "end": 156.88, "text": " And it turns out that some of these noise distributions are also useful against the human"}, {"start": 156.88, "end": 158.07999999999998, "text": " visual system."}, {"start": 158.07999999999998, "end": 159.39999999999998, "text": " Make sure to have a look at the paper."}, {"start": 159.39999999999998, "end": 164.39999999999998, "text": " I have found it to be an easy read, and quite frankly, I am stunned by the result."}, {"start": 164.39999999999998, "end": 168.79999999999998, "text": " It is clear that machine learning research is progressing at a staggering pace, but I"}, {"start": 168.79999999999998, "end": 170.28, "text": " haven't expected this."}, {"start": 170.28, "end": 172.2, "text": " I haven't expected this at all."}, {"start": 172.2, "end": 175.96, "text": " If you are enjoying the series, please make sure to have a look at our Patreon page to"}, {"start": 175.96, "end": 181.28, "text": " pick up cool perks like watching these episodes in early access or getting your name displayed"}, {"start": 181.28, "end": 183.92000000000002, "text": " in the video description as a key supporter."}, {"start": 183.92000000000002, "end": 186.16, "text": " Details are available in the video description."}, {"start": 186.16, "end": 203.32, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SA4YEAWVpbk
One Pixel Attack Defeats Neural Networks | Two Minute Papers #240
The paper "One pixel attack for fooling deep neural networks" is available here: https://arxiv.org/abs/1710.08864 This seems like an unofficial implementation: https://github.com/Hyperparticle/one-pixel-attack-keras Differential evolution animation credit: https://pablormier.github.io/2017/09/05/a-tutorial-on-differential-evolution-with-python/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Malek Cellier, Frank Goertzen, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-3010129/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We had many episodes about new wondrous AI-related algorithms, but today we are going to talk about AI safety, which is an increasingly important field of AI research. Deep neural networks are excellent classifiers, which means that after we train them on a large amount of data, they will be remarkably accurate at image recognition. So generally, accuracy is subject to maximization. But no one said a word about robustness, and here is where these new neural network defeating techniques come into play. Earlier we have shown that we can fool neural networks by adding carefully crafted noise to an image. If done well, this noise is barely perceptible and can fool the classifier into looking at a bus and thinking that it is an ostrich. We often refer to this as an adversarial attack on a neural network. This is one way of doing it, but note that we have to change many, many pixels of the image to perform such an attack. So the next question is clear. What is the lowest number of pixel changes that we have to perform to fool a neural network? What is the magic number? One would think that a reasonable number would at least be a hundred. Hold onto your papers, because this paper shows that many neural networks can be defeated by only changing one pixel. By changing only one pixel in an image that depicts a horse, the AI will be 99.9% sure that we are seeing a frog. A ship can also be disguised as a car, or, amusingly, almost anything can be seen as an airplane. So how can we perform such an attack? As you can see here, these neural networks typically don't provide a class directly, but a bunch of confidence values. What does this mean exactly? The confidence values denote how sure the network is that we see a Labrador or a Tiger Cat. To come to a decision, we usually look at all of these confidence values and choose the object type that has the highest confidence. Now clearly, we have to know which pixel position to choose and what color it should be to perform a successful attack. We can do this by performing a bunch of random changes to the image and checking how each of these changes performed in decreasing the confidence of the network in the appropriate class. After this, we filter out the bad ones and continue our search around the most promising candidates. This process is referred to as differential evolution, and if we perform it properly, in the end, the confidence value for the correct class will be so low that a different class will take over. If this happens, the network has been defeated. Now note that this also means that we have to be able to look into the neural network and have access to the confidence values. There is also plenty of research work on training more robust neural networks that can withstand as many adversarial changes to the inputs as possible. I cannot wait to report on these works as well in the future. Also, our next episode is going to be on adversarial attacks on the human vision system. Can you believe that? That paper is absolutely insane, so make sure to subscribe and hit the bell icon to get notified. You don't want to miss that one. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fehir."}, {"start": 4.36, "end": 10.14, "text": " We had many episodes about new wondrous AI-related algorithms, but today we are going to talk about"}, {"start": 10.14, "end": 14.72, "text": " AI safety which is an increasingly important field of AI research."}, {"start": 14.72, "end": 19.46, "text": " Deep neural networks are excellent classifiers, which means that after we train them on a large"}, {"start": 19.46, "end": 23.96, "text": " amount of data, they will be remarkably accurate at image recognition."}, {"start": 23.96, "end": 27.64, "text": " So generally, accuracy is subject to maximization."}, {"start": 27.64, "end": 32.28, "text": " But no one said a word about robustness, and here is where these new neural network"}, {"start": 32.28, "end": 34.72, "text": " defeating techniques come into play."}, {"start": 34.72, "end": 39.08, "text": " Earlier we have shown that we can fool neural networks by adding carefully crafted noise"}, {"start": 39.08, "end": 40.08, "text": " to an image."}, {"start": 40.08, "end": 44.96, "text": " If done well, this noise is barely perceptible and can fool the classifier into looking"}, {"start": 44.96, "end": 48.8, "text": " at a bus and thinking that it is an ostrich."}, {"start": 48.8, "end": 52.88, "text": " We often refer to this as an adversarial attack on a neural network."}, {"start": 52.88, "end": 57.92, "text": " This is one way of doing it, but note that we have to change many, many pixels of the image"}, {"start": 57.92, "end": 59.72, "text": " to perform such an attack."}, {"start": 59.72, "end": 61.64, "text": " So the next question is clear."}, {"start": 61.64, "end": 67.44, "text": " What is the lowest number of pixel changes that we have to perform to fool a neural network?"}, {"start": 67.44, "end": 69.16, "text": " What is the magic number?"}, {"start": 69.16, "end": 73.08, "text": " One would think that a reasonable number would at least be a hundred."}, {"start": 73.08, "end": 77.6, "text": " Hold onto your papers because this paper shows that many neural networks can be defeated"}, {"start": 77.6, "end": 80.92, "text": " by only changing one pixel."}, {"start": 80.92, "end": 87.48, "text": " By changing only one pixel in an image that depicts a horse, the AI will be 99.9% sure that"}, {"start": 87.48, "end": 89.52, "text": " we are seeing a frog."}, {"start": 89.52, "end": 95.48, "text": " A ship can also be disguised as a car, or, amusingly, almost anything can be seen as"}, {"start": 95.48, "end": 96.72, "text": " an airplane."}, {"start": 96.72, "end": 99.4, "text": " So how can we perform such an attack?"}, {"start": 99.4, "end": 103.76, "text": " As you can see here, these neural networks typically don't provide a class directly,"}, {"start": 103.76, "end": 106.08, "text": " but a bunch of confidence values."}, {"start": 106.08, "end": 107.64, "text": " What does this mean exactly?"}, {"start": 107.64, "end": 113.08, "text": " The confidence values denote how sure the network is that we see a Labrador or a Tiger"}, {"start": 113.08, "end": 114.08, "text": " Cat."}, {"start": 114.08, "end": 118.08, "text": " To come to a decision, we usually look at all of these confidence values and choose the"}, {"start": 118.08, "end": 121.12, "text": " object type that has the highest confidence."}, {"start": 121.12, "end": 126.08, "text": " Now clearly, we have to know which pixel position to choose and what color it should be"}, 
{"start": 126.08, "end": 128.2, "text": " to perform a successful attack."}, {"start": 128.2, "end": 132.68, "text": " We can do this by performing a bunch of random changes to the image and checking how"}, {"start": 132.68, "end": 137.4, "text": " each of these changes performed in decreasing the confidence of the network in the appropriate"}, {"start": 137.4, "end": 138.56, "text": " class."}, {"start": 138.56, "end": 142.96, "text": " After this, we filter out the bad ones and continue our search around the most promising"}, {"start": 142.96, "end": 144.12, "text": " candidates."}, {"start": 144.12, "end": 148.92000000000002, "text": " This process will refer to as differential evolution, and if we perform it properly,"}, {"start": 148.92000000000002, "end": 153.72, "text": " in the end, the confidence value for the correct class will be so low that a different class"}, {"start": 153.72, "end": 155.08, "text": " will take over."}, {"start": 155.08, "end": 158.0, "text": " If this happens, the network has been defeated."}, {"start": 158.0, "end": 162.08, "text": " Now note that this also means that we have to be able to look into the neural network"}, {"start": 162.08, "end": 164.72, "text": " and have access to the confidence values."}, {"start": 164.72, "end": 168.72, "text": " There is also plenty of research works on training more robust neural networks that can"}, {"start": 168.72, "end": 172.84, "text": " withstand as many adversarial changes to the inputs as possible."}, {"start": 172.84, "end": 176.04, "text": " I cannot wait to report on these works as well in the future."}, {"start": 176.04, "end": 181.52, "text": " Also, our next episode is going to be on adversarial attacks on the human vision system."}, {"start": 181.52, "end": 182.72, "text": " Can you believe that?"}, {"start": 182.72, "end": 187.64, "text": " That paper is absolutely insane, so make sure to subscribe and hit the bell icon to get"}, {"start": 187.64, "end": 188.64, "text": " notified."}, {"start": 188.64, "end": 189.84, "text": " You don't want to miss that one."}, {"start": 189.84, "end": 195.52, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=veWkBsK0nwU
DeepMind's AI Learns Complex Behaviors From Scratch | Two Minute Papers #239
The paper "Learning by Playing - Solving Sparse Reward Tasks from Scratch" is available here: https://arxiv.org/abs/1802.10567 Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-2009819/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize the score. This class of techniques enables us to train an AI to master a large variety of video games and has many more cool applications. Reinforcement learning typically works well when the rewards are dense. What does this mean exactly? This means that if we play a game and after making a mistake we immediately die, it is easy to identify which action of ours was the mistake. However, if the rewards are sparse, we are likely playing something that is akin to a long-term strategy planner game. If we lost, it is possible that we were outmaneuvered in the final battle, but it is also possible that we lost the game way earlier due to building the wrong kind of economy. There are a million other possible reasons, because we get feedback on how well we have done only once, and much, much after we have chosen our actions. Learning from sparse rewards is very challenging, even for humans. And it gets even worse. In this problem formulation, we don't have any teachers that guide the learning of the algorithm and no prior knowledge of the environment. So this problem sounds almost impossible to solve. So what did DeepMind scientists come up with to at least have a chance of approaching it? And now, hold on to your papers, because this algorithm learns like a baby learns about its environment. This means that before we start solving problems, the algorithm is unleashed into the environment to experiment and master basic tasks. In this case, our final goal would be to tidy up the table. First, the algorithm learns to activate its haptic sensors, control the joints and fingers, then it learns to grab an object, and then to stack objects on top of each other. And in the end, the robot will learn that tidying up is nothing else but a sequence of these elementary actions that it had already mastered. The algorithm also has an internal scheduler that decides which should be the next action to master, while keeping in mind that the goal is to maximize progress on the main task, which is tidying up the table in this case. And now, on to validation. When we are talking about software projects, the question of real-life viability often emerges. So the question is how this technique would work in reality, and what else would be the ultimate test than running it on a real robot arm? Let's look here and marvel at the fact that it easily finds and moves the green block to the appropriate spot. And note that it had learned how to do it from scratch, much like a baby would learn to perform such tasks. And also note that this was a software project that was deployed on this robot arm, which means that the algorithm generalizes well for different control mechanisms. A property that is highly sought after when talking about intelligence. And if earlier progress in machine learning research is indicative of the future, this may learn how to perform backflips and play video games on a superhuman level. And I will be here to report on that for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.0, "end": 11.0, "text": " Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize the score."}, {"start": 11.0, "end": 18.0, "text": " This class of techniques enables us to train an AI to master a large variety of video games and has many more cool applications."}, {"start": 18.0, "end": 22.0, "text": " Reinforcement learning typically works well when the rewards are dense."}, {"start": 22.0, "end": 24.0, "text": " What does this mean exactly?"}, {"start": 24.0, "end": 33.0, "text": " This means that if we play a game and after making a mistake we immediately die, it is easy to identify which action of ours was the mistake."}, {"start": 33.0, "end": 40.0, "text": " However, if the rewards are sparse, we are likely playing something that is akin to a long-term strategy planner game."}, {"start": 40.0, "end": 50.0, "text": " If we lost, it is possible that we were outmaneuvered in the final battle, but it is also possible that we lost the game way earlier due to building the wrong kind of economy."}, {"start": 50.0, "end": 58.0, "text": " There are a million other possible reasons because we get feedback on how well we have done only once and much, much after we have chosen our actions."}, {"start": 58.0, "end": 62.0, "text": " Learning from sparse rewards is very challenging, even for humans."}, {"start": 62.0, "end": 64.0, "text": " And it gets even worse."}, {"start": 64.0, "end": 71.0, "text": " In this problem formulation, we don't have any teachers that guide the learning of the algorithm and no prior knowledge of the environment."}, {"start": 71.0, "end": 75.0, "text": " So this problem sounds almost impossible to solve."}, {"start": 75.0, "end": 80.0, "text": " So what did DeepMind scientists come up with to at least have a chance of approaching it?"}, {"start": 80.0, "end": 86.0, "text": " And now, hold on to your papers because this algorithm learns like a baby learns about its environment."}, {"start": 86.0, "end": 94.0, "text": " This means that before we start solving problems, the algorithm would be unleashed into the environment to experiment and master basic tasks."}, {"start": 94.0, "end": 98.0, "text": " In this case, our final goal would be to tidy up the table."}, {"start": 98.0, "end": 109.0, "text": " First, the algorithm learns to activate its haptic sensors, control the joints and fingers, then it learns to grab an object and then to stack objects on top of each other."}, {"start": 109.0, "end": 117.0, "text": " And in the end, the robot will learn that tidying up is nothing else but a sequence of these elementary actions that it had already mastered."}, {"start": 117.0, "end": 127.0, "text": " The algorithm also has an internal scheduler that decides which should be the next action to master while keeping in mind that the goal is to maximize progress on the main task."}, {"start": 127.0, "end": 131.0, "text": " Which is tidying up the table in this case."}, {"start": 131.0, "end": 133.0, "text": " And now, on to validation."}, {"start": 133.0, "end": 139.0, "text": " When we are talking about software projects, the question of real life viability often emerges."}, {"start": 139.0, "end": 147.0, "text": " So the question is how would this technique work in reality and what else would be the ultimate test than running it on a real robot arm?"}, {"start": 147.0, "end": 153.0, "text": " Let's 
look here and marvel at the fact that it easily finds and moves the green block to the appropriate spot."}, {"start": 153.0, "end": 159.0, "text": " And note that it had learned how to do it from scratch much like a baby would learn to perform such tasks."}, {"start": 159.0, "end": 169.0, "text": " And also note that this was a software project that was deployed on this robot arm, which means that the algorithm generalizes well for different control mechanisms."}, {"start": 169.0, "end": 173.0, "text": " A property that is highly sought after when talking about intelligence."}, {"start": 173.0, "end": 182.0, "text": " And if earlier progress in machine learning research is indicative of the future, this may learn how to perform backflips and play video games on a super human level."}, {"start": 182.0, "end": 184.0, "text": " And I will be here to report on that for you."}, {"start": 184.0, "end": 212.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
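The internal scheduler mentioned in the transcript can be thought of as a bandit-style choice over auxiliary tasks. The sketch below is only a guess at that idea, with made-up task names and usefulness numbers; the paper's actual scheduler and policies are learned and far richer than this.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical auxiliary tasks and a main task; the numbers below are made up
# and only stand in for "how much practicing this task tends to help".
aux_tasks = ["activate touch sensors", "move arm", "grasp object", "stack objects"]
true_usefulness = np.array([0.05, 0.10, 0.30, 0.55])   # unknown to the scheduler

estimated_gain = np.zeros(len(aux_tasks))   # scheduler's running estimates
counts = np.zeros(len(aux_tasks))

def practice(task_idx):
    # Stand-in for letting the agent practice one auxiliary task and measuring
    # the resulting progress on the main task (tidying up the table).
    return true_usefulness[task_idx] + rng.normal(scale=0.05)

for step in range(500):
    # Softmax scheduling: tasks that historically helped the main task more
    # are chosen more often, but every task keeps a nonzero probability.
    prefs = np.exp(estimated_gain / 0.1)
    probs = prefs / prefs.sum()
    task = rng.choice(len(aux_tasks), p=probs)

    gain = practice(task)
    counts[task] += 1
    estimated_gain[task] += (gain - estimated_gain[task]) / counts[task]

for name, g, c in zip(aux_tasks, estimated_gain, counts):
    print(f"{name:25s} estimated gain {g:5.2f}  practiced {int(c)} times")
```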
Two Minute Papers
https://www.youtube.com/watch?v=oWpp1YYcCsU
DeepMind's AI Masters Even More Atari Games | Two Minute Papers #238
The paper "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures" is available here: https://arxiv.org/abs/1802.01561 Update: Its source code is now available here: https://github.com/deepmind/scalable_agent DeepMind Lab: https://arxiv.org/abs/1612.03801 Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-1548365/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that we can use to choose a set of actions in an environment to maximize a score. There are many applications of such learners, but we typically cite video games because of the diverse set of challenges they can present the player with. And in reinforcement learning, we typically have one task, like learning backflips, and one agent that we wish to train to perform it well. This work is DeepMind's attempt to supercharge reinforcement learning by training one agent that can do a much wider variety of tasks. Now, this clearly means that we have to acquire more training data and also be prepared to process all this data as effectively as possible. By the way, the test suite that you see here is also new, where typical tasks in this environment involve pathfinding through mazes, collecting objects, finding keys to open their matching doors, and more. And every Fellow Scholar knows that the paper describing its details is of course available in the video description. This new technique builds upon an earlier architecture that was also published by DeepMind. This earlier architecture, A3C, unleashes a bunch of actors into the wilderness, each of which gets a copy of the playbook that contains the current strategy. These actors then play the game independently and periodically stop and add what worked and what didn't to this playbook. With this new IMPALA architecture, there are two key changes to this. One, in the middle, we have a learner, and the actors don't share what worked and what didn't with this learner, but they share their experiences instead. And later, the centralized learner will come up with the proper conclusions from all this data. Imagine if each football player in a team tries to tell the coach the things they tried on the field and what worked. That is surely going to work at least okay, but instead of these conclusions, we could aggregate all the experience of the players into some sort of centralized hive mind and get access to a lot more and higher quality information. Maybe we will see that a strategy only works well if executed by players who are known to be faster than their opponents on the field. The other key difference is that with traditional reinforcement learning, we play for a given number of steps, then stop and perform learning. With this technique, we have decoupled the playing and learning, therefore it is possible to create an algorithm that performs both of them continuously. This also raises new questions; make sure to have a look at the paper, specifically the part with the new off-policy correction method by the name V-trace. When tested on 30 of these different levels and a bunch of Atari games, the new technique was typically able to double the score of the previous A3C architecture, which was also really good. And at the same time, this is at least 10 times more data efficient, and its knowledge generalizes better to other tasks. We have had many episodes on neural network based techniques, but as you can see, research on the reinforcement learning side is also progressing at a remarkable pace. If you have enjoyed this episode and you feel that 8 science videos a month is worth a dollar, please consider supporting us on Patreon. You can also pick up cool perks like early access to these episodes. The link is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.5, "end": 9.08, "text": " Reinforcement learning is a learning algorithm that we can use to choose a set of actions"}, {"start": 9.08, "end": 11.620000000000001, "text": " in an environment to maximize a score."}, {"start": 11.620000000000001, "end": 15.8, "text": " There are many applications of such learners, but we typically cite video games because"}, {"start": 15.8, "end": 19.22, "text": " of the diverse set of challenges they can present the player with."}, {"start": 19.22, "end": 24.1, "text": " And in reinforcement learning, we typically have one task, like learning backflips and"}, {"start": 24.1, "end": 27.46, "text": " one agent that we wish to train to perform it well."}, {"start": 27.46, "end": 32.82, "text": " This work is deep minds attempt to supercharge reinforcement learning by training one agent"}, {"start": 32.82, "end": 35.86, "text": " that can do a much wider variety of tasks."}, {"start": 35.86, "end": 40.94, "text": " Now, this clearly means that we have to acquire more training data and also be prepared to"}, {"start": 40.94, "end": 44.620000000000005, "text": " process all this data as effectively as possible."}, {"start": 44.620000000000005, "end": 49.540000000000006, "text": " By the way, the test suite that you see here is also new where typical tasks in this environment"}, {"start": 49.540000000000006, "end": 55.14, "text": " involve pathfinding through mazes, collecting objects, finding keys to open their matching"}, {"start": 55.14, "end": 56.78, "text": " doors, and more."}, {"start": 56.78, "end": 61.94, "text": " And every fellow scholar knows that the paper describing its details is of course available"}, {"start": 61.94, "end": 63.22, "text": " in the video description."}, {"start": 63.22, "end": 68.06, "text": " This new technique builds upon an earlier architecture that was also published by DeepMind."}, {"start": 68.06, "end": 74.34, "text": " This earlier architecture A3C unleashes a bunch of actors into the wilderness, each of which"}, {"start": 74.34, "end": 78.74000000000001, "text": " gets a copy of the playbook that contains the current strategy."}, {"start": 78.74000000000001, "end": 84.42, "text": " These actors then play the game independently and periodically stop and share what worked"}, {"start": 84.42, "end": 86.62, "text": " and what didn't to this playbook."}, {"start": 86.62, "end": 91.02000000000001, "text": " With this new Impala architecture, there are two key changes to this."}, {"start": 91.02000000000001, "end": 95.58000000000001, "text": " One, in the middle, we have a learner and the actors don't share what worked and what"}, {"start": 95.58000000000001, "end": 99.46000000000001, "text": " didn't to this learner, but they share their experiences instead."}, {"start": 99.46000000000001, "end": 103.86000000000001, "text": " And later, the centralized learner will come up with the proper conclusions with all this"}, {"start": 103.86000000000001, "end": 104.86000000000001, "text": " data."}, {"start": 104.86000000000001, "end": 109.06, "text": " Imagine if each football player in a team tries to tell the coach the things they tried"}, {"start": 109.06, "end": 111.14, "text": " on the field and what worked."}, {"start": 111.14, "end": 115.54, "text": " That is surely going to work at least okay, but instead of these conclusions, we could"}, {"start": 115.54, "end": 120.66000000000001, "text": " aggregate all 
the experience of the players into some sort of centralized hive mind and"}, {"start": 120.66000000000001, "end": 124.9, "text": " get access to a lot more and higher quality information."}, {"start": 124.9, "end": 129.9, "text": " Maybe we will see that a strategy only works well if executed by the players who are known"}, {"start": 129.9, "end": 132.82, "text": " to be faster than their opponents on the field."}, {"start": 132.82, "end": 137.02, "text": " The other key difference is that with traditional reinforcement learning, we play for a given"}, {"start": 137.02, "end": 140.58, "text": " number of steps, then stop and perform learning."}, {"start": 140.58, "end": 145.38, "text": " With this technique, we have decoupled the playing and learning, therefore it is possible"}, {"start": 145.38, "end": 149.62, "text": " to create an algorithm that performs both of them continuously."}, {"start": 149.62, "end": 153.78, "text": " This also raises new questions, make sure to have a look at the paper, specifically the"}, {"start": 153.78, "end": 158.46, "text": " part with the new off-policy correction method by the name VTrace."}, {"start": 158.46, "end": 163.14, "text": " When tested on 30 of these different levels and a bunch of Atari games, the new technique"}, {"start": 163.14, "end": 168.78, "text": " was typically able to double the score of the previous E3C architecture, which was also"}, {"start": 168.78, "end": 169.85999999999999, "text": " really good."}, {"start": 169.85999999999999, "end": 175.34, "text": " And at the same time, this is at least 10 times more data efficient and its knowledge generalizes"}, {"start": 175.34, "end": 177.5, "text": " better to other tasks."}, {"start": 177.5, "end": 182.26, "text": " We have had many episodes on neural network based techniques, but as you can see, research"}, {"start": 182.26, "end": 186.74, "text": " on the reinforcement learning side is also progressing at a remarkable pace."}, {"start": 186.74, "end": 191.78, "text": " If you have enjoyed this episode and you feel that 8 science videos a month is worth a dollar,"}, {"start": 191.78, "end": 194.18, "text": " please consider supporting us on Patreon."}, {"start": 194.18, "end": 197.9, "text": " You can also pick up cool perks like early access to these episodes."}, {"start": 197.9, "end": 200.06, "text": " The link is available in the video description."}, {"start": 200.06, "end": 205.7, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
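To illustrate the actor-learner split described above, here is a toy Python sketch in which actors push raw trajectories into a queue and a central learner consumes them, instead of each actor sending back its own conclusions. Everything here is a stand-in (the "policy" is a single number, the environment is random), and the V-trace off-policy correction from the paper is deliberately left out.

```python
import random
from collections import deque

random.seed(4)

# A trajectory is a list of (observation, action, reward, policy_version) tuples.
policy_version = 0
policy_param = 0.0
experience_queue = deque()

def actor_run_episode(local_param, local_version):
    # Actors play with a possibly stale copy of the policy and send raw
    # experience (not gradients or conclusions) back to the learner.
    trajectory = []
    for t in range(5):
        obs = random.random()
        action = local_param + random.gauss(0, 0.1)
        reward = -(action - obs) ** 2
        trajectory.append((obs, action, reward, local_version))
    return trajectory

for step in range(200):
    # Several actors contribute trajectories gathered under older parameters.
    for actor in range(4):
        stale_param = policy_param - random.uniform(0, 0.05)   # simulated lag
        experience_queue.append(actor_run_episode(stale_param, policy_version))

    # The central learner consumes a batch of experience and updates the policy.
    batch = [experience_queue.popleft() for _ in range(len(experience_queue))]
    grad, n = 0.0, 0
    for trajectory in batch:
        for obs, action, reward, _ in trajectory:
            grad += 2 * (obs - action)   # gradient of the reward w.r.t. the action
            n += 1
    policy_param += 0.01 * grad / n
    policy_version += 1

print("learned policy parameter:", round(policy_param, 3))
```

In the real system the actors and the learner run concurrently on many machines, which is exactly why the lag between the behaviour policy and the learner's policy needs the V-trace correction.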
Two Minute Papers
https://www.youtube.com/watch?v=dxOHmvTaCN4
AI Learns Human Pose Estimation From Videos | Two Minute Papers #237
The paper "DensePose: Dense Human Pose Estimation In The Wild" is available here: https://arxiv.org/abs/1802.00434 http://densepose.org/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-3178198/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This project is a collaboration between Inria and Facebook AI Research and is about pose estimation. Pose estimation means that we take an input photo or, in the cooler case, video of people, and the output should be a description of their postures. This is kind of like motion capture for those amazing movie and computer game animations, but without the studio and the markers. This work goes even further and tries to offer a full 3D reconstruction of the geometry of the bodies, and it is in fact doing way more than that, as you will see in a minute. Neural networks are usually great at these tasks, provided that we have a large number of training samples to train them. So, the first step is gathering a large amount of annotated data. This means an input photograph of someone which is paired up with the correct description of their posture. This is what we call one training sample. This new proposed dataset contains 50,000 of these training samples, and using that we can proceed to step number 2, training the neural network to perform pose estimation. But there is more to this particular work. Normally, this pose estimation takes place with a 2D skeleton, which means that most techniques output a stick figure. But not in this case, because the dataset contains segmentations and dense correspondences between 2D images and 3D models, therefore the network is also able to output fully 3D models. There are plenty of interesting details shown in the paper. For instance, since the annotated ground truth footage in the training set is created by humans, there is plenty of missing data that is filled in by using a separate neural network that is specialized for this task. Make sure to have a look at the paper for more cool details like this. This all sounds good in theory, but a practical application has to be robust against occlusions and rapid changes in posture. And the good thing is that the authors published plenty of examples with these that you can see here. Also, it has to be able to deal with smaller and bigger scales when people are closer or further away from the camera. This is also a challenge. The algorithm does a really good job at this, and remember, no markers or studio setup is required and everything that you see here is performed interactively. The dataset will appear soon and it will be possible to reuse it for future research works, so I expect plenty more collaborations and follow-up works on this problem. We are living amazing times indeed. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.5600000000000005, "end": 9.76, "text": " This project is a collaboration between Inria and Facebook AI Research and is about pose"}, {"start": 9.76, "end": 10.76, "text": " estimation."}, {"start": 10.76, "end": 16.96, "text": " Pose estimation means that we take an input photo or, in the cooler case, video of people"}, {"start": 16.96, "end": 20.240000000000002, "text": " and the output should be a description of their postures."}, {"start": 20.240000000000002, "end": 25.64, "text": " This is kind of like motion capture for those amazing movie and computer game animations,"}, {"start": 25.64, "end": 28.72, "text": " but without the studio and the markers."}, {"start": 28.72, "end": 33.96, "text": " This work goes even further and tries to offer a full 3D reconstruction of the geometry"}, {"start": 33.96, "end": 39.28, "text": " of the bodies and it is in fact doing way more than that as you will see in a minute."}, {"start": 39.28, "end": 43.28, "text": " Neural networks are usually great at these tasks provided that we have a large number of"}, {"start": 43.28, "end": 45.16, "text": " training samples to train them."}, {"start": 45.16, "end": 49.84, "text": " So, the first step is gathering a large amount of annotated data."}, {"start": 49.84, "end": 53.96, "text": " This means an input photograph of someone which is paired up with the correct description"}, {"start": 53.96, "end": 55.28, "text": " of their posture."}, {"start": 55.28, "end": 57.84, "text": " This is what we call one training sample."}, {"start": 57.84, "end": 63.400000000000006, "text": " This new proposed dataset contains 50,000 of these training samples and using that we"}, {"start": 63.400000000000006, "end": 68.4, "text": " can proceed to step number 2 training the neural network to perform pose estimation."}, {"start": 68.4, "end": 71.12, "text": " But, there is more to this particular work."}, {"start": 71.12, "end": 76.2, "text": " Normally, this pose estimation takes place with a 2D skeleton which means that most techniques"}, {"start": 76.2, "end": 78.12, "text": " output the stick figure."}, {"start": 78.12, "end": 83.0, "text": " But not in this case because the dataset contains segmentations and dense correspondences"}, {"start": 83.0, "end": 89.96, "text": " between 2D images and 3D models, therefore the network is also able to output fully 3D"}, {"start": 89.96, "end": 90.96, "text": " models."}, {"start": 90.96, "end": 94.08, "text": " There are plenty of interesting details shown in the paper."}, {"start": 94.08, "end": 98.44, "text": " For instance, since the annotated Grand Truth footage in the training set is created by"}, {"start": 98.44, "end": 104.36, "text": " humans, there is plenty of missing data that is filled in by using a separate neural network"}, {"start": 104.36, "end": 106.56, "text": " that is specialized for this task."}, {"start": 106.56, "end": 109.96000000000001, "text": " Make sure to have a look at the paper for more cool details like this."}, {"start": 109.96, "end": 115.36, "text": " This all sounds good in theory, but a practical application has to be robust against occlusions"}, {"start": 115.36, "end": 117.32, "text": " and rapid changes in posture."}, {"start": 117.32, "end": 121.63999999999999, "text": " And the good thing is that the authors published plenty of examples with these that you can"}, {"start": 
121.63999999999999, "end": 122.63999999999999, "text": " see here."}, {"start": 122.63999999999999, "end": 127.6, "text": " Also, it has to be able to deal with smaller and bigger scales when people are closer"}, {"start": 127.6, "end": 129.56, "text": " or further away from the camera."}, {"start": 129.56, "end": 131.2, "text": " This is also a challenge."}, {"start": 131.2, "end": 136.32, "text": " The algorithm does a really good job at this and remember no markers or studio setup is"}, {"start": 136.32, "end": 141.16, "text": " required and everything that you see here is performed interactively."}, {"start": 141.16, "end": 145.48, "text": " The dataset will appear soon and it will be possible to reuse it for future research"}, {"start": 145.48, "end": 150.68, "text": " works, so I expect plenty of more collaboration and follow-up works for this problem."}, {"start": 150.68, "end": 152.88, "text": " We are living amazing times indeed."}, {"start": 152.88, "end": 173.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
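"Dense correspondences" in the transcript means that every pixel on a person gets a body part label plus coordinates on that part's surface, which can then be lifted to 3D. The sketch below only illustrates that data structure with made-up part patches; it does not reflect the actual DensePose model or its outputs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical output of a dense pose network: for every pixel of a person,
# a body part index plus (u, v) coordinates on that part's surface patch.
H, W = 4, 6
part_index = rng.integers(0, 3, size=(H, W))   # 3 imaginary body parts
uv = rng.uniform(0.0, 1.0, size=(H, W, 2))

# Made-up control points: each part's surface patch is a planar quad in 3D,
# so a (u, v) pair can be turned into a 3D point by bilinear interpolation.
part_corners = rng.normal(size=(3, 2, 2, 3))   # part, u-corner, v-corner, xyz

def surface_point(part, u, v):
    c = part_corners[part]
    top = (1 - u) * c[0, 0] + u * c[1, 0]
    bottom = (1 - u) * c[0, 1] + u * c[1, 1]
    return (1 - v) * top + v * bottom

# Lift every pixel of the (tiny) image onto the 3D body surface.
points = np.array([[surface_point(part_index[y, x], *uv[y, x])
                    for x in range(W)] for y in range(H)])
print(points.shape)   # (4, 6, 3): one 3D surface point per pixel
```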
Two Minute Papers
https://www.youtube.com/watch?v=UPcR7S8ue1A
AI-Based Animoji Without The iPhone X | Two Minute Papers #236
The paper "Avatar Digitization From a Single Image For Real-Time Rendering" is available here: http://www.hao-li.com/publications/papers/siggraphAsia2017ADFSIFRTR.pdf http://www.hao-li.com/Hao_Li/Hao_Li_-_publications.html Demo for iOS: http://pinscreen.com/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Many of you have surely heard the word Animoji, which refers to these emoji figures that are animated in real time and react to our facial gestures. This is implemented in the new iPhone X phones; however, to accomplish this, it uses a dot projector to get a good enough understanding of the geometry of the human face. So how about a technique that doesn't need any specialized gear, takes not even a video but just one photograph of us as an input, and creates a digital avatar of us that can be animated in real time? Well, sign me up. Have a look at these incredible results. As you can see, the final result also includes secondary components like eyes, teeth, tongue, and gums. Now, the avatars don't have to be fully photorealistic, but they have to capture the appearance and gestures of the user well enough so they can be used in video games or any telepresence application where a set of users interact in a virtual world. As opposed to many prior works, the hair is not reconstructed strand by strand, because doing this in real time is not feasible. Also, note that the information we are given is highly incomplete, because the backside of the head is not captured, but these characters also have a quite appropriate looking hairstyle there. How is this even possible? Well, first the input image is segmented into the face part and the hair part. Then the hair part is run through a neural network that tries to extract attributes like length, spikiness, whether there are hair bands, whether there is a ponytail, where the hairline is, and more. This is an extremely deep neural network with over 50 layers, and it took 40,000 images of different hairstyles to train. Now, since it is highly unlikely that the input photo shows someone with a hairstyle that was never ever worn by anyone else, we can look into a big dataset of already existing hairstyles and choose the closest one that fits the attributes extracted by the neural network. Such a smart idea, loving it. You can see how well this works in practice, and in the next step, the movement and the appearance of the final hair geometry can be computed in real time through a novel polygonal strip representation. The technique also supports retargeting, which means that our gestures can be transferred to different characters. The framework is also very robust to different lighting conditions, which means that a differently lit photograph will lead to very similar outputs. The same applies for expressions. This is one of those highly desirable details that makes or breaks the usability of a new technique in production environments, and this one passed with flying colors. In these comparisons, you can also see that the quality of the results also smokes the competition. A variant of the technology can be downloaded through the link in the video description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.84, "end": 9.76, "text": " Many of you have surely heard the word an emoji, which refers to these emoji figures that"}, {"start": 9.76, "end": 14.280000000000001, "text": " are animated in real time and react to our facial gestures."}, {"start": 14.280000000000001, "end": 19.56, "text": " This is implemented in the new iPhone X phones, however, to accomplish this, it uses a dot"}, {"start": 19.56, "end": 24.6, "text": " projector to get a good enough understanding of the geometry of the human face."}, {"start": 24.6, "end": 29.96, "text": " So how about a technique that doesn't need any specialized gear, takes not even a video"}, {"start": 29.96, "end": 35.84, "text": " of you but one photograph as an input and creates a digital avatar of us that can be animated"}, {"start": 35.84, "end": 37.44, "text": " in real time."}, {"start": 37.44, "end": 38.96, "text": " Well, sign me up."}, {"start": 38.96, "end": 41.24, "text": " Have a look at these incredible results."}, {"start": 41.24, "end": 47.96, "text": " As you can see, the final result also includes secondary components like eyes, teeth, tongue,"}, {"start": 47.96, "end": 48.96, "text": " and gum."}, {"start": 48.96, "end": 54.040000000000006, "text": " Now the avatars don't have to be fully photorealistic but have to capture the appearance"}, {"start": 54.04, "end": 59.96, "text": " and gestures of the user well enough so they can be used in video games or any telepresence"}, {"start": 59.96, "end": 64.16, "text": " application where a set of users interact in a virtual world."}, {"start": 64.16, "end": 68.94, "text": " As opposed to many prior works, the hair is not reconstructed strand by strand because"}, {"start": 68.94, "end": 71.56, "text": " doing this in real time is not feasible."}, {"start": 71.56, "end": 76.28, "text": " Also, note that the information we are given is highly incomplete because the backside"}, {"start": 76.28, "end": 81.6, "text": " of the head is not captured but these characters also have a quite appropriate looking hairstyle"}, {"start": 81.6, "end": 82.6, "text": " there."}, {"start": 82.6, "end": 84.67999999999999, "text": " How is this even possible?"}, {"start": 84.67999999999999, "end": 90.32, "text": " Well, first the input image is segmented into the face part and the hair part."}, {"start": 90.32, "end": 95.32, "text": " Then the hair part is run through a neural network that tries to extract attributes like"}, {"start": 95.32, "end": 102.56, "text": " length, spikiness, or their hair bands is their ponytail where the hairline is and more."}, {"start": 102.56, "end": 108.0, "text": " This is an extremely deep neural network with over 50 layers and it took 40,000 images"}, {"start": 108.0, "end": 110.08, "text": " of different hair styles to train."}, {"start": 110.08, "end": 115.03999999999999, "text": " Now since it is highly unlikely that the input photo shows someone with a hairstyle that"}, {"start": 115.03999999999999, "end": 120.56, "text": " was never ever worn by anyone else, we can look into a big data set of already existing"}, {"start": 120.56, "end": 125.36, "text": " hairstyles and choose the closest one that fits the attributes extracted by the neural"}, {"start": 125.36, "end": 126.36, "text": " network."}, {"start": 126.36, "end": 129.0, "text": " Such a smart idea, loving it."}, {"start": 129.0, "end": 133.52, "text": " You can see how well this works in 
practice and in the next step, the movement and the"}, {"start": 133.52, "end": 139.32, "text": " appearance of the final hair geometry can be computed in real time through a novel polygonal"}, {"start": 139.32, "end": 140.88, "text": " strip representation."}, {"start": 140.88, "end": 145.6, "text": " The technique also supports retargeting, which means that our gestures can be transferred"}, {"start": 145.6, "end": 147.32, "text": " to different characters."}, {"start": 147.32, "end": 152.32, "text": " The framework is also very robust to different lighting conditions, which means that a differently"}, {"start": 152.32, "end": 155.88, "text": " lead photograph will lead to very similar outputs."}, {"start": 155.88, "end": 158.0, "text": " The same applies for expressions."}, {"start": 158.0, "end": 162.51999999999998, "text": " This is one of those highly desirable details that makes or breaks the usability of a new"}, {"start": 162.51999999999998, "end": 167.79999999999998, "text": " technique in production environments and this one passed with flying colors."}, {"start": 167.8, "end": 172.28, "text": " In these comparisons, you can also see that the quality of the results also smokes the"}, {"start": 172.28, "end": 173.28, "text": " competition."}, {"start": 173.28, "end": 177.84, "text": " A variant of the technology can be downloaded through the link in the video description."}, {"start": 177.84, "end": 197.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iBaWVuaSQ-Q
A Photo Enhancer AI | Two Minute Papers #235
The paper "DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks" and its demo is available here: http://people.ee.ethz.ch/~ihnatova/ http://phancer.com/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-3157391/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some time ago, smartphone cameras were trying to outpace each other by adding more and more megapixels to their specification sheet. The difference between a half-megapixel image and a 4-megapixel image was night and day. However, nowadays, we have entered into diminishing returns, as most newer mobile cameras support 8 or more megapixels. At this point, a further resolution increase doesn't lead to significantly more convincing photos. And here is where the processing software takes the spotlight. This paper is about an AI-based technique that takes a poor-quality photo and automatically enhances it. Here you can already see what a difference software can make to these photos. Many of these photos were taken with an 8-year-old mobile camera and were enhanced by the AI. This is insanity. Now, before anyone thinks that by enhancement, I'm referring to the classic workflow of adjusting white balance, color levels, and hues. No, no, no. By enhancement, I mean the big, heavy hitters, like recreating lost details via super resolution and image inpainting, image deblurring, denoising, and recovering colors that were not even recorded by the camera. The idea is the following. First, we shoot a lot of photos from the same viewpoint with a bunch of cameras, ranging from a relatively dated iPhone 3GS, through other mid-tier mobile cameras, to a state-of-the-art DSLR camera. Then, we hand over this huge bunch of data to a neural network that learns the typical features that are preserved by the better cameras and lost by the worse ones. The network does the same with relating the noise patterns and color profiles to each other. Then, we use this network to recover these lost features and pump up the quality of our lower-tier camera to be as close as possible to a much more expensive model. Super smart idea. Loving it. And you know what is even more brilliant? The validation of this work can take place in a scientific manner, because we don't need to take a group of photographers who will twirl their mustaches and judge these photos. Though, I'll note that this was also done for good measure. But since we have the photos from the high-quality DSLR camera, we can take the bad photos, enhance them with the AI, and compare this output to the real DSLR's output. Absolutely brilliant. The source code, pre-trained networks, and an online demo are also available. So, let the experiments begin. And make sure to leave a comment with your findings. What do you think about the outputs shown on the website? Did you try your own photo? Let me know in the comments section. A high-quality validation section, lots of results, candid discussion of the limitations in the paper, published source code, pre-trained networks, and online demos that everyone can try free of charge. Scientists at ETH Zurich maxed this paper out. This is as good as it gets. If you have enjoyed this episode and would like to help us make better videos in the future, please consider supporting us on Patreon by clicking the letter P at the end screen of this video in a moment, or just have a look at the video description. Thanks for watching and for your generous support, and I'll see you next time.
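A minimal sketch of the paired training idea behind this technique: a small convolutional network is trained to map a phone photo towards its aligned DSLR counterpart. The real DPED generator is a deeper residual network trained with content, color, and texture (adversarial) losses; the tiny model, the random tensors standing in for aligned patches, and the plain pixel-wise loss below are simplifications to keep the sketch self-contained and runnable.

```python
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    """Deliberately small stand-in for the enhancement network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction on top of the input photo.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

model = TinyEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()   # placeholder for the paper's richer loss mix

# phone_batch / dslr_batch would be aligned patches of the same scene shot with
# the phone and the DSLR; random tensors keep this sketch self-contained.
phone_batch = torch.rand(4, 3, 100, 100)
dslr_batch = torch.rand(4, 3, 100, 100)

for step in range(50):
    enhanced = model(phone_batch)
    loss = loss_fn(enhanced, dslr_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time, the trained network is simply applied to a new phone photo; no DSLR image is needed anymore.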
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifejir."}, {"start": 4.6000000000000005, "end": 9.4, "text": " Some time ago, smartphone cameras were trying to outpace each other by adding more and"}, {"start": 9.4, "end": 12.200000000000001, "text": " more megapixels to their specification sheet."}, {"start": 12.200000000000001, "end": 17.52, "text": " The difference between a half megapixel image and a 4 megapixel image was night and day."}, {"start": 17.52, "end": 23.8, "text": " However, nowadays, we have entered into diminishing returns as most newer mobile cameras support"}, {"start": 23.8, "end": 25.88, "text": " 8 or more megapixels."}, {"start": 25.88, "end": 30.599999999999998, "text": " At this point, a further resolution increase doesn't lead to significantly more convincing"}, {"start": 30.599999999999998, "end": 31.599999999999998, "text": " photos."}, {"start": 31.599999999999998, "end": 35.2, "text": " And here is where the processing software takes the spotlight."}, {"start": 35.2, "end": 40.72, "text": " This paper is about an AI-based technique that takes a poor quality photo and automatically"}, {"start": 40.72, "end": 42.239999999999995, "text": " enhances it."}, {"start": 42.239999999999995, "end": 47.120000000000005, "text": " Here you can already see what a difference software can make to these photos."}, {"start": 47.120000000000005, "end": 52.08, "text": " Many of these photos were taken with an 8-year-old mobile camera and were enhanced by the"}, {"start": 52.08, "end": 53.08, "text": " AI."}, {"start": 53.08, "end": 54.44, "text": " This is insanity."}, {"start": 54.44, "end": 60.559999999999995, "text": " Now, before anyone thinks that by enhancement, I'm referring to the classic workflow of adjusting"}, {"start": 60.559999999999995, "end": 63.56, "text": " wide balance, color levels and use."}, {"start": 63.56, "end": 64.56, "text": " No, no, no."}, {"start": 64.56, "end": 70.36, "text": " By enhancement, I mean the big, heavy hitters, like recreating lost details via super resolution"}, {"start": 70.36, "end": 76.68, "text": " and image in-painting, image de-blurring, denoising and recovering colors that were not even"}, {"start": 76.68, "end": 78.84, "text": " recorded by the camera."}, {"start": 78.84, "end": 80.84, "text": " The idea is the following."}, {"start": 80.84, "end": 85.48, "text": " First, we shoot a lot of photos from the same viewpoint with a bunch of cameras ranging"}, {"start": 85.48, "end": 92.64, "text": " from a relatively dated iPhone 3GS, other mid-tier mobile cameras and a state-of-the-art DSLR"}, {"start": 92.64, "end": 93.64, "text": " camera."}, {"start": 93.64, "end": 98.32000000000001, "text": " Then, we hand over this huge bunch of data to a neural network that learns the typical"}, {"start": 98.32000000000001, "end": 103.96000000000001, "text": " features that are preserved by the better cameras and lost by the worse ones."}, {"start": 103.96000000000001, "end": 108.16, "text": " The network does the same with relating the noise patterns and color profiles to each"}, {"start": 108.16, "end": 109.16, "text": " other."}, {"start": 109.16, "end": 114.28, "text": " Then, we use this network to recover these lost features and pump up the quality of our"}, {"start": 114.28, "end": 119.56, "text": " lower-tier camera to be as close as possible to a much more expensive model."}, {"start": 119.56, "end": 121.32, "text": " Super smart idea."}, {"start": 121.32, "end": 
122.32, "text": " Loving it."}, {"start": 122.32, "end": 124.03999999999999, "text": " And you know what is even more brilliant?"}, {"start": 124.03999999999999, "end": 128.48, "text": " The validation of this work can take place in a scientific manner, because we don't need"}, {"start": 128.48, "end": 133.64, "text": " to take a group of photographers who will twirl their mass stashes and judge these photos."}, {"start": 133.64, "end": 137.35999999999999, "text": " Though, I'll note that this was also done for good measure."}, {"start": 137.36, "end": 143.20000000000002, "text": " But since we have the photos from the high-quality DSLR camera, we can take the bad photos and"}, {"start": 143.20000000000002, "end": 149.44000000000003, "text": " hence them with the AI and compare this output to the real DSLR's output."}, {"start": 149.44000000000003, "end": 150.64000000000001, "text": " Absolutely brilliant."}, {"start": 150.64000000000001, "end": 155.08, "text": " The source code and pre-trained networks and an online demo is also available."}, {"start": 155.08, "end": 157.56, "text": " So, let the experiments begin."}, {"start": 157.56, "end": 160.12, "text": " And make sure to leave a comment with your findings."}, {"start": 160.12, "end": 163.0, "text": " What do you think about the outputs shown in the website?"}, {"start": 163.0, "end": 164.56, "text": " Did you try your own photo?"}, {"start": 164.56, "end": 166.04000000000002, "text": " Let me know in the comments section."}, {"start": 166.04, "end": 171.35999999999999, "text": " A high-quality validation section, lots of results, candid discussion of the limitations"}, {"start": 171.35999999999999, "end": 176.84, "text": " in the paper, published source code, pre-trained network and online demos that everyone can"}, {"start": 176.84, "end": 178.88, "text": " try free of charge."}, {"start": 178.88, "end": 182.6, "text": " Scientists at ETH Zurich max this paper out."}, {"start": 182.6, "end": 184.32, "text": " This is as good as it gets."}, {"start": 184.32, "end": 188.32, "text": " If you have enjoyed this episode and would like to help us make better videos in the future,"}, {"start": 188.32, "end": 192.88, "text": " please consider supporting us on Patreon by clicking the letter P at the end screen of"}, {"start": 192.88, "end": 196.56, "text": " this video in a moment or just have a look at the video description."}, {"start": 196.56, "end": 225.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pVgC-7QTr40
Building Blocks of AI Interpretability | Two Minute Papers #234
The paper "Building Blocks of Interpretability" is available here: https://distill.pub/2018/building-blocks/ Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1210559/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Hold on to your papers, because this is an exclusive look at a new neural network visualization paper that came from a collaboration between Google and Carnegie Mellon University. The paper is as fresh as it gets, because this is the first time I have been given an exclusive look before the paper came out, and this means that this video and the paper itself will be published at the same time. This is really cool and it's quite an honor. Thank you very much. Neural networks are powerful learning-based tools that are super useful for tasks that are difficult to explain, but easy to demonstrate. For instance, it is hard to mathematically define what a traffic sign is, but we have plenty of photographs of them. So the idea is simple: we label a bunch of photographs with additional data that says this is a traffic sign and this one isn't, and feed this to a learning algorithm. As a result, neural networks have been able to perform traffic sign detection at a superhuman level for many years now. Scientists at Google DeepMind have also shown us that if we combine a neural network with reinforcement learning, we can get it to look at the screen and play computer games on a very high level. It is incredible to see problems that seemed impossible for many decades crumble one by one in quick succession over the last few years. However, we have a problem, and that problem is interpretability. There is no doubt that these neural networks are efficient; however, they cannot explain their decisions to us, at least not in a way that we can interpret. To alleviate this, earlier works tried to visualize these networks on the level of neurons, particularly what kinds of inputs make these individual neurons extremely excited. This paper is about combining previously known techniques to unlock more powerful ways to visualize these networks. For instance, we can combine the individual neuron visualization with class attributions. This offers a better way of understanding how a neural network decides whether a photo depicts a labrador or a tiger cat. Here we can see which part of the image activates a given neuron and what the neuron is looking for, and this leads us to the final decision as to which class this image should belong to. The next visualization technique shows us which set of detectors contributed to the final decision and how much they contributed exactly. Another way towards better interpretability is to condense the overwhelming number of neurons into smaller groups with more semantic meaning. This process is referred to as factorization or neuron grouping in the paper. If we do this, we can obtain highly descriptive labels that we can endow with intuitive meanings. For instance, here we see that in order for the network to classify the image as a labrador, it needs to see a combination of floppy ears, doggy forehead, doggy mouth, and a bunch of fur. We can also construct a nice activation map to show which part of the image makes our groups excited. Please note that we have only scratched the surface. This is a beautiful paper and it has tons more results available exactly from this moment, with plenty of interactive examples you can play with. Not only that, but the code is open sourced, so you are also able to reproduce these visualizations with little to no setup. Make sure to have a look at it in the video description. Thanks for watching and for your generous support, and I'll see you next time.
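The neuron grouping (factorization) step can be illustrated with non-negative matrix factorization over the spatial activations of one layer, which is in the spirit of what the paper describes. The random activation tensor and the choice of four groups below are placeholders; in practice the activations would come from a trained image classifier.

```python
import numpy as np
from sklearn.decomposition import NMF

# Pretend this is the activation tensor of one conv layer for one image:
# height x width x channels (ReLU activations are non-negative, as NMF requires).
activations = np.random.rand(14, 14, 512).astype(np.float32)

h, w, c = activations.shape
flat = activations.reshape(h * w, c)            # one row per spatial position

n_groups = 4                                     # how many "concept groups" we want
nmf = NMF(n_components=n_groups, init="nndsvd", max_iter=400, random_state=0)
spatial_factors = nmf.fit_transform(flat)        # (h*w, n_groups): where each group fires
channel_factors = nmf.components_                # (n_groups, c): which channels form each group

group_maps = spatial_factors.reshape(h, w, n_groups)
print(group_maps.shape)   # each slice is an activation map for one neuron group
```

Each group's channel weights can then be fed to a feature visualization routine to produce the "floppy ears" or "doggy forehead" style icons shown in the interactive article.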
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.46, "end": 9.540000000000001, "text": " Hold on to your papers because this is an exclusive look at a new neural network visualization"}, {"start": 9.540000000000001, "end": 14.96, "text": " paper that came from a collaboration between Google and the Carnegie Mellon University."}, {"start": 14.96, "end": 19.88, "text": " The paper is as fresh as it gets because this is the first time I have been given an"}, {"start": 19.88, "end": 24.72, "text": " exclusive look before the paper came out and this means that this video and the paper"}, {"start": 24.72, "end": 28.2, "text": " itself will be published at the same time."}, {"start": 28.2, "end": 30.64, "text": " This is really cool and it's quite an honor."}, {"start": 30.64, "end": 31.919999999999998, "text": " Thank you very much."}, {"start": 31.919999999999998, "end": 36.68, "text": " Neural networks are powerful learning based tools that are super useful for tasks that are"}, {"start": 36.68, "end": 40.6, "text": " difficult to explain, but easy to demonstrate."}, {"start": 40.6, "end": 45.64, "text": " For instance, it is hard to mathematically define what a traffic sign is, but we have plenty"}, {"start": 45.64, "end": 47.28, "text": " of photographs of them."}, {"start": 47.28, "end": 52.239999999999995, "text": " So the idea is simple, we label a bunch of photographs with additional data that says"}, {"start": 52.239999999999995, "end": 55.8, "text": " this is a traffic sign and this one isn't."}, {"start": 55.8, "end": 58.199999999999996, "text": " And feed this to a learning algorithm."}, {"start": 58.199999999999996, "end": 63.4, "text": " As a result, neural networks have been able to perform traffic sign detection at the superhuman"}, {"start": 63.4, "end": 65.88, "text": " level for many years now."}, {"start": 65.88, "end": 70.2, "text": " Scientists at Google Deep might have also shown us that if we combine a neural network with"}, {"start": 70.2, "end": 74.96, "text": " reinforcement learning, we can get it to look at the screen and play computer games on"}, {"start": 74.96, "end": 76.4, "text": " a very high level."}, {"start": 76.4, "end": 81.92, "text": " It is incredible to see problems that seemed impossible for many decades cramble one by"}, {"start": 81.92, "end": 85.36, "text": " one in quick succession over the last few years."}, {"start": 85.36, "end": 89.52, "text": " However, we have a problem and that problem is interpretability."}, {"start": 89.52, "end": 94.16, "text": " There is no doubt that these neural networks are efficient, however, they cannot explain"}, {"start": 94.16, "end": 98.56, "text": " their decisions to us, at least not in a way that we can interpret."}, {"start": 98.56, "end": 104.4, "text": " To alleviate this, earlier works tried to visualize these networks on the level of neurons,"}, {"start": 104.4, "end": 110.12, "text": " particularly what kinds of inputs make these individual neurons extremely excited."}, {"start": 110.12, "end": 115.24, "text": " This paper is about combining previously known techniques to unlock more powerful ways"}, {"start": 115.24, "end": 117.24, "text": " to visualize these networks."}, {"start": 117.24, "end": 123.32, "text": " For instance, we can combine the individual neuron visualization with class attributions."}, {"start": 123.32, "end": 128.4, "text": " This offers a better way of understanding how a neuron network 
decides whether a photo"}, {"start": 128.4, "end": 132.12, "text": " depicts a labrador or a tiger cat."}, {"start": 132.12, "end": 137.0, "text": " Here we can see which part of the image activates a given neuron and what the neuron is looking"}, {"start": 137.0, "end": 138.0, "text": " for."}, {"start": 138.0, "end": 145.48, "text": " So we see the final decision as to which class this image should belong to."}, {"start": 145.48, "end": 150.16, "text": " The next visualization technique shows us which set of detectors contributed to the final"}, {"start": 150.16, "end": 173.64, "text": " decision and how much they contributed exactly."}, {"start": 173.64, "end": 178.07999999999998, "text": " Another way towards better interpretability is to decrease the overwhelming number of"}, {"start": 178.08, "end": 182.12, "text": " neurons into smaller groups with more semantic meaning."}, {"start": 182.12, "end": 187.76000000000002, "text": " This process is referred to as factorization or neuron grouping in the paper."}, {"start": 187.76000000000002, "end": 192.4, "text": " If we do this, we can obtain highly descriptive labels that we can endow with intuitive"}, {"start": 192.4, "end": 193.4, "text": " meanings."}, {"start": 193.4, "end": 198.64000000000001, "text": " For instance, here we see that in order for the network to classify the image as a labrador,"}, {"start": 198.64000000000001, "end": 205.12, "text": " it needs to see a combination of floppy ears, doggy forehead, doggy mouth, and a bunch"}, {"start": 205.12, "end": 206.12, "text": " of fur."}, {"start": 206.12, "end": 210.84, "text": " We can also construct a nice activation map to show which part of the image makes our"}, {"start": 210.84, "end": 212.24, "text": " groups excited."}, {"start": 212.24, "end": 214.84, "text": " Please note that we have only scratched the surface."}, {"start": 214.84, "end": 219.96, "text": " This is a beautiful paper and it has tons of more results available exactly from this"}, {"start": 219.96, "end": 223.84, "text": " moment with plenty of interactive examples you can play with."}, {"start": 223.84, "end": 229.84, "text": " Not only that, but the code is open sourced so you are also able to reproduce these visualizations"}, {"start": 229.84, "end": 231.68, "text": " with little to no setup."}, {"start": 231.68, "end": 234.0, "text": " Make sure to have a look at it in the video description."}, {"start": 234.0, "end": 237.68, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=izZofvgaIig
Why Should We Trust An AI? | Two Minute Papers #233
The paper "Why Should I Trust You? - Explaining the Predictions of Any Classifier" and its implementation is available here: http://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf https://github.com/marcotcr/lime Our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-563428/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Through over 200 episodes of this series, we have talked about many learning-based algorithms that are able to solve problems that previously seemed completely impossible. They can look at an image and describe what it depicts in a sentence, or even turn video game graphics into reality and back. Amazing new results keep appearing every single week. However, an important problem that we need to solve is that if we deploy these neural networks in a production environment, we would want to know whether we are relying on a good or a bad AI's decision. The narrative is very simple: if we don't trust a classifier, we won't use it. And perhaps the best way of earning the trust of a human would be if the AI could explain how it came to a given decision. Strictly speaking, a neural network can explain it to us, but it will show us hundreds of thousands of neural activations that are completely unusable for any sort of intuitive reasoning. So, what is even more difficult is making sure that this explanation happens in a way that we can interpret. An earlier approach used decision trees that described what the learner looks at and how it uses this information to arrive at a conclusion. This new work is quite different. For instance, imagine that a neural network would look at all the information we know about the patient and tell us that this patient likely has the flu. And in the meantime, it could tell us that the fact that the patient has a headache and sneezes a lot contributed to the conclusion that he has the flu, but the lack of fatigue is notable evidence against it. Our doctor could take this information and, instead of blindly relying on the output, could make a more informed decision. A fine example of a case where AI does not replace but augments human labor. An elegant tool for a more civilized age. Here, we see an example image where the classifier explains which region contributes to the decision that this image depicts a cat, and which region seems to be counter-evidence. We can use this not only for tabular patient data and images, but for text as well. In this other example, we try to find out whether a piece of written text is about Christianity or atheism. Note that the decision itself is not as simple as looking for a few keywords. Even a mid-tier classifier is much more sophisticated than that. But it can tell us about the main contributing factors. A big additional selling point is that this technique is model-agnostic, which means that it can be applied to other learning algorithms that are able to perform classification. It is also a possibility that an AI is only right by chance, and if this is the case, we should definitely know about that. And here, in this example, with the additional explanation, it is rather easy to find that we have a bad model that looks at the background of the image and thinks that it is the fur of a wolf. The tests indicate that humans make significantly better decisions when they lean on explanations that are extracted by this technique. The source code of this project is also available. Thanks for watching and for your generous support, and I'll see you next time.
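The core recipe behind this kind of explanation (perturb the input, ask the black-box model about the perturbations, and fit a small interpretable model weighted by proximity to the original instance) can be sketched from scratch. The flu toy model, the Gaussian perturbations, and the ridge surrogate below are illustrative assumptions, not the exact procedure or API of the published implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x, n_samples=500, scale=0.5, kernel_width=1.0):
    """Toy LIME-style explanation for a single tabular instance x.
    black_box(X) should return the probability of the class of interest."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance around x.
    X_pert = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbations.
    y_pert = black_box(X_pert)
    # 3. Weight samples by how close they stay to x.
    distances = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a simple, interpretable surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)
    return surrogate.coef_

# Toy black box: "flu probability" rises with headache and sneezing, falls with fatigue.
def flu_model(X):
    headache, sneezing, fatigue = X[:, 0], X[:, 1], X[:, 2]
    return 1.0 / (1.0 + np.exp(-(2.0 * headache + 1.5 * sneezing - 2.5 * fatigue)))

patient = np.array([1.0, 1.0, 0.0])          # headache, sneezes a lot, no fatigue
print(explain_locally(flu_model, patient))    # positive weights support "flu", negative oppose it
```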
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zona Ifeher."}, {"start": 4.5200000000000005, "end": 9.24, "text": " Through over 200 episodes of this series, we talked about many learning-based algorithms"}, {"start": 9.24, "end": 14.08, "text": " that are able to solve problems that previously seemed completely impossible."}, {"start": 14.08, "end": 19.92, "text": " They can look at an image and describe what they depict in a sentence, or even turn video"}, {"start": 19.92, "end": 23.44, "text": " game graphics into reality and back."}, {"start": 23.44, "end": 26.400000000000002, "text": " Amazing new results keep appearing every single week."}, {"start": 26.4, "end": 31.0, "text": " However, an important thing that we need to solve is that if we deploy these neural networks"}, {"start": 31.0, "end": 35.6, "text": " in a production environment, we would want to know if we are relying on a good or bad"}, {"start": 35.6, "end": 37.0, "text": " AI's decision."}, {"start": 37.0, "end": 38.839999999999996, "text": " The narrative is very simple."}, {"start": 38.839999999999996, "end": 41.64, "text": " If we don't trust a classifier, we won't use it."}, {"start": 41.64, "end": 46.519999999999996, "text": " And perhaps the best way of earning the trust of a human would be if the AI could explain"}, {"start": 46.519999999999996, "end": 48.519999999999996, "text": " how it came to a given decision."}, {"start": 48.519999999999996, "end": 53.0, "text": " Strictly speaking, a neural network can explain it to us, but it will show us hundreds"}, {"start": 53.0, "end": 58.72, "text": " of thousands of neural activations that are completely unusable for any sort of intuitive"}, {"start": 58.72, "end": 59.72, "text": " reasoning."}, {"start": 59.72, "end": 64.68, "text": " So, what is even more difficult to solve is that this explanation happens in a way that"}, {"start": 64.68, "end": 66.12, "text": " we can interpret."}, {"start": 66.12, "end": 70.72, "text": " An earlier approach used decision trees that described what the learner looks at and how"}, {"start": 70.72, "end": 74.12, "text": " it uses this information to arrive to a conclusion."}, {"start": 74.12, "end": 76.32, "text": " This new work is quite different."}, {"start": 76.32, "end": 80.64, "text": " For instance, imagine that a neural network would look at all the information we know about"}, {"start": 80.64, "end": 84.56, "text": " the patient and tell us that this patient likely has the flu."}, {"start": 84.56, "end": 88.8, "text": " And in the meantime, it could tell us that the fact that the patient has a headache and"}, {"start": 88.8, "end": 93.84, "text": " sneezes a lot contributed to the conclusion that he has the flu, but the lack of fatigue"}, {"start": 93.84, "end": 95.8, "text": " is notable evidence against it."}, {"start": 95.8, "end": 100.44, "text": " Our doctor could take this information and instead of blindly relying on the output,"}, {"start": 100.44, "end": 102.52, "text": " could make a more informed decision."}, {"start": 102.52, "end": 107.88, "text": " A fine example of a case where AI does not replace but augment human labor."}, {"start": 107.88, "end": 110.6, "text": " An elegant tool for a more civilized age."}, {"start": 110.6, "end": 115.28, "text": " Here, we see an example image where the classifier explains which region contributes to the"}, {"start": 115.28, "end": 120.67999999999999, "text": " decision that this image depicts a cat and which 
region seems to be counter evidence."}, {"start": 120.67999999999999, "end": 126.11999999999999, "text": " We can use this not only for tabulated patient data and images, but text as well."}, {"start": 126.11999999999999, "end": 131.76, "text": " In this other example, we try to find out whether a piece of written text is about Christianity"}, {"start": 131.76, "end": 133.4, "text": " or atheism."}, {"start": 133.4, "end": 137.72, "text": " Note that the decision itself is not as simple as looking for a few keywords."}, {"start": 137.72, "end": 141.68, "text": " Even a mid-tier classifier is much more sophisticated than that."}, {"start": 141.68, "end": 144.92, "text": " But it can tell us about the main contributing factors."}, {"start": 144.92, "end": 149.68, "text": " A big additional selling point is that this technique is model agnostic, which means that"}, {"start": 149.68, "end": 154.88, "text": " it can be applied to other learning algorithms that are able to perform classification."}, {"start": 154.88, "end": 160.16, "text": " It is also a possibility that an AI is only right by chance and if this is the case, we"}, {"start": 160.16, "end": 162.0, "text": " should definitely know about that."}, {"start": 162.0, "end": 167.0, "text": " And here, in this example, with the additional explanation, it is rather easy to find that"}, {"start": 167.0, "end": 171.96, "text": " we have a bad model that looks at the background of the image and thinks that it is the fur"}, {"start": 171.96, "end": 173.12, "text": " of a wolf."}, {"start": 173.12, "end": 178.72, "text": " The tests indicate that humans make significantly better decisions when they lean on explanations"}, {"start": 178.72, "end": 180.8, "text": " that are extracted by this technique."}, {"start": 180.8, "end": 183.56, "text": " The source code of this project is also available."}, {"start": 183.56, "end": 203.48000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hzpxXZJQNFg
DeepMind's WaveNet, 1000 Times Faster | Two Minute Papers #232
The paper "Parallel WaveNet: Fast High-Fidelity Speech Synthesis" is available here: https://arxiv.org/abs/1711.10433 Our Patreon page: https://www.patreon.com/TwoMinutePapers DeepMind's Blog: https://deepmind.com/blog/wavenet-launches-google-assistant/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-3172471/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Due to popular demand, here is the new DeepMind paper on WaveNet. WaveNet is a text-to-speech algorithm that takes a sentence as an input and gives us audio footage of these words being uttered by a person of our choice. Let's listen to some results from the original algorithm. Note that these are all synthesized by the AI. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. Aspects of the Sublime in English Poetry and Painting, 1770-1850. Aspects of the Sublime in English Poetry and Painting, 1770-1850. All this requires is some training data from this person's voice, typically 10 to 30 hours, and a ton of computational power. The computational power part is especially of interest, because we have to produce over 16,000 to 24,000 samples for each second of continuous audio footage. And unfortunately, as you can see here, these new samples are generated one by one. And since today's graphics cards are highly parallel, this means that it is a waste to have one compute unit that does all the work while the others are sitting there twiddling their thumbs. We need to make this more parallel somehow. So, the solution is simple: instead of one, we can just simply make more samples in parallel. No, no, no, no, no. It doesn't work like that. And the reason for this is that speech is not like random noise. It is highly coherent, where the new samples are highly dependent on the previous ones. We can only create one new sample at a time. So, how can we create the new waveform in one go, using these many compute units in parallel? This new WaveNet variant starts out from white noise and applies changes to it over time to morph it into the output speech waveform. The changes take place in parallel over the entirety of the signal, so that's a good sign. It works by creating a reference network that is slow, but correct. Let's call this the teacher network. And the new algorithm arises as a student network, which tries to mimic what the teacher does, but the student tries to be more efficient at that. This has a similar vibe to generative adversarial networks, where we have two networks: one is actively trying to fool the other one, while this other one tries to better distinguish fake inputs from real ones. However, it is fundamentally different because of the fact that the student does not try to fool the teacher, but mimic it while being more efficient. And this yields a blisteringly fast version of WaveNet that is over a thousand times faster than its predecessor. It is not just real time, it is 20 times faster than real time. And you know what the best part is? Usually, there are heavy trade-offs for this. But this time, the validation section of the paper reveals that there is no perceived difference in the outputs from the original algorithm. Hell yeah! So, where can we try it? Well, it is already deployed online in Google Assistant, in multiple English and Japanese voices. So, as you see, I was wrong. I said that a few papers down the line, it will definitely be done in real time. Apparently, with this new work, it is not a few papers down the line, it is one, and it is not a bit faster, but a thousand times faster. Things are getting out of hand real quick, and I mean this in the best possible way. What a time to be alive! This is one incredible and highly inspiring work.
Make sure to have a look at the paper, perfect training for the mind. As always, it is available in the video description. Thanks for watching, and for your generous support, and I'll see you next time.
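The teacher-student setup can be sketched in its simplest form: a frozen teacher provides the training signal, and a fast student learns to produce a full waveform in one parallel pass from noise plus conditioning. The paper's actual objective is probability density distillation between an autoregressive teacher and an inverse-autoregressive-flow student; the stand-in convolutional nets and the plain regression loss below only show the data flow, not the real architecture.

```python
import torch
import torch.nn as nn

T = 16000   # one second of 16 kHz audio

# Pretend this is the slow, pre-trained WaveNet; it is frozen during distillation.
teacher = nn.Sequential(
    nn.Conv1d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv1d(32, 1, 3, padding=1)
)
for p in teacher.parameters():
    p.requires_grad = False

# The fast, parallel student takes noise plus conditioning (2 input channels).
student = nn.Sequential(
    nn.Conv1d(2, 32, 3, padding=1), nn.ReLU(), nn.Conv1d(32, 1, 3, padding=1)
)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(20):
    conditioning = torch.randn(4, 1, T)     # stand-in for text/linguistic features
    noise = torch.randn(4, 1, T)            # the student starts from white noise
    with torch.no_grad():
        teacher_wave = teacher(conditioning)          # "slow but correct" reference output
    student_wave = student(torch.cat([noise, conditioning], dim=1))  # one parallel pass
    loss = ((student_wave - teacher_wave) ** 2).mean()  # crude stand-in for the distillation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the student's pass has no sample-by-sample dependency, every output sample is computed at once, which is where the large speedup comes from.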
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Yjona Yfahir."}, {"start": 4.6000000000000005, "end": 9.0, "text": " Due to popular demand, here is the new Deep Mind Paper on WaveNet."}, {"start": 9.0, "end": 13.6, "text": " WaveNet is a text to speech algorithm that takes a sentence as an input and gives us"}, {"start": 13.6, "end": 18.400000000000002, "text": " audio footage of these words being uttered by a person of our choice."}, {"start": 18.400000000000002, "end": 21.400000000000002, "text": " Let's listen to some results from the original algorithm."}, {"start": 21.400000000000002, "end": 25.6, "text": " Note that these are all synthesized by the AI."}, {"start": 25.6, "end": 32.0, "text": " The Blue Lagoon is a 1980 American romance and adventure film directed by Randall Cliser."}, {"start": 32.0, "end": 39.0, "text": " The Blue Lagoon is a 1980 American romance and adventure film directed by Randall Cliser."}, {"start": 39.0, "end": 45.0, "text": " Aspects of the Sublime in English Poetry and Painting, 1770-1850."}, {"start": 45.0, "end": 50.6, "text": " Aspects of the Sublime in English Poetry and Painting, 1770-1850."}, {"start": 50.6, "end": 56.0, "text": " All this requires is some training data from this person's voice, typically 10-30 hours,"}, {"start": 56.0, "end": 59.0, "text": " and a ton of computational power."}, {"start": 59.0, "end": 66.4, "text": " The computational power part is especially of interest because we have to produce over 16-24,000 samples"}, {"start": 66.4, "end": 69.6, "text": " for each second of continuous audio footage."}, {"start": 69.6, "end": 74.6, "text": " And unfortunately, as you can see here, these new samples are generated one by one."}, {"start": 74.6, "end": 77.6, "text": " And since today's graphics cards are highly parallel,"}, {"start": 77.6, "end": 82.6, "text": " this means that it is a waste to get them to have one compute unit that does all the work"}, {"start": 82.6, "end": 85.6, "text": " while the others are sitting there twiddling their thumbs."}, {"start": 85.6, "end": 87.8, "text": " We need to make this more parallel somehow."}, {"start": 87.8, "end": 89.6, "text": " So, the solution is simple."}, {"start": 89.6, "end": 93.39999999999999, "text": " Instead of one, we can just simply make more samples in parallel."}, {"start": 93.39999999999999, "end": 95.8, "text": " No, no, no, no, no. 
It doesn't work like that."}, {"start": 95.8, "end": 99.39999999999999, "text": " And the reason for this is that speech is not like Randall's noise."}, {"start": 99.39999999999999, "end": 104.39999999999999, "text": " It is highly coherent where the new samples are highly dependent on the previous ones."}, {"start": 104.39999999999999, "end": 107.19999999999999, "text": " We can only create one new sample at a time."}, {"start": 107.2, "end": 113.2, "text": " So, how can we create the new waveform in one go using these many compute units in parallel?"}, {"start": 113.2, "end": 116.4, "text": " This new wave-nut variant starts out from white noise"}, {"start": 116.4, "end": 121.8, "text": " and applies changes to it over time to morph it into the output speech waveform."}, {"start": 121.8, "end": 127.4, "text": " The changes take place in parallel over the entirety of the signal, so that's a good sign."}, {"start": 127.4, "end": 131.4, "text": " It works by creating a reference network that is slow, but correct."}, {"start": 131.4, "end": 133.6, "text": " Let's call this the Teacher Network."}, {"start": 133.6, "end": 139.4, "text": " And the new algorithm arises as a student network which tries to mimic what the teacher does,"}, {"start": 139.4, "end": 142.6, "text": " but the student tries to be more efficient at that."}, {"start": 142.6, "end": 147.6, "text": " This has a similar vibe to generative adversarial networks where we have two networks."}, {"start": 147.6, "end": 150.2, "text": " One is actively trying to fool the other one,"}, {"start": 150.2, "end": 154.79999999999998, "text": " while this other one tries to better distinguish fake inputs from real ones."}, {"start": 154.79999999999998, "end": 160.6, "text": " However, it is fundamentally different because of the fact that the student does not try to fool the teacher,"}, {"start": 160.6, "end": 163.6, "text": " but mimic it while being more efficient."}, {"start": 163.6, "end": 170.6, "text": " And this yields a blistering fast version of wave-nut that is over a thousand times faster than its predecessor."}, {"start": 170.6, "end": 175.2, "text": " It is not real time, it is 20 times faster than real time."}, {"start": 175.2, "end": 177.2, "text": " And you know what the best part is?"}, {"start": 177.2, "end": 179.4, "text": " Usually, there are heavy trade-offs for this."}, {"start": 179.4, "end": 187.0, "text": " But this time, the validation section of the paper reveals that there is no perceived difference in the outputs from the original algorithm."}, {"start": 187.0, "end": 188.2, "text": " Hell yeah!"}, {"start": 188.2, "end": 190.2, "text": " So, where can we try it?"}, {"start": 190.2, "end": 196.6, "text": " Well, it is already deployed online in Google Assistant, in multiple English and Japanese voices."}, {"start": 196.6, "end": 198.79999999999998, "text": " So, as you see, I was wrong."}, {"start": 198.79999999999998, "end": 203.6, "text": " I said that a few papers down the line, it will definitely be done in real time."}, {"start": 203.6, "end": 208.39999999999998, "text": " Apparently, with this new work, it is not a few papers down the line, it is one,"}, {"start": 208.39999999999998, "end": 212.0, "text": " and it is not a bit faster, but a thousand times faster."}, {"start": 212.0, "end": 216.79999999999998, "text": " Things are getting out of hand real quick, and I mean this in the best possible way."}, {"start": 216.79999999999998, "end": 218.39999999999998, "text": " What a time to be alive!"}, 
{"start": 218.4, "end": 222.0, "text": " This is one incredible and highly inspiring work."}, {"start": 222.0, "end": 225.4, "text": " Make sure to have a look at the paper, perfect training for the mind."}, {"start": 225.4, "end": 228.20000000000002, "text": " As always, it is available in the video description."}, {"start": 228.2, "end": 249.79999999999998, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uGhyOBSzdTs
Bubble Collision Simulations in Milliseconds | Two Minute Papers #231
The paper "A Hyperbolic Geometric Flow for Evolving Films and Foams" is available here: https://sadashigeishida.bitbucket.io/hgf/index.html Recommended for you: 1. Reddit discussion on bubble thickness measurements - https://www.reddit.com/r/askscience/comments/1wva6u/is_it_possible_to_measure_the_thickness_of_a_soap/ 2. An early episode on bubbles - https://www.youtube.com/watch?v=uj8b5mu0P7Y We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1916692/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about simulating on a computer what happens when bubbles collide. Prepare for lots of beautiful footage. This is typically done by simulating the Navier-Stokes equations that describe the evolution of the velocity within a piece of fluid over time. However, because the world around us is a continuum, we cannot compute these quantities in an infinite number of points. So, we have to subdivide the 3D space into a grid and compute them only in these grid points. The finer the grid, the more details appear in our simulations. If we try to simulate what happens when these bubbles collide, we would need to create a grid that can capture these details. This is an issue, because the thickness of a bubble film is in the order of 10 to 800 nanometers, and this would require a hopelessly fine, high-resolution grid. By the way, measuring the thickness of bubbles is a science of its own; there is a fantastic reddit discussion on it, I put a link to it in the video description, make sure to check it out. So, these overly fine grids take too long to compute, so what do we do? Well, first we need to focus on how to directly compute how the shape of soap bubbles evolves over time. Fortunately, from Belgian physicist Joseph Plateau, we know that they seek to reduce their surface area, but retain their volume over time. One of the many beautiful phenomena in nature. So, this shall be the first step. We simulate forces that create the appropriate shape changes and proceed into an intermediate state. However, by pushing the film inwards, its volume has decreased. Therefore, this intermediate state is not how it should look in nature. This is to be remedied now, where we apply a volume correction step. In the validation section, it is shown that the results follow Plateau's laws quite closely. Also, you know well that my favorite kind of validation is when we let reality be our judge, and in this work, the results have been compared to a real-life experimental setup and proved to be very close to it. Take a little time to absorb this: we can write a computer program that reproduces what would happen in reality, and it results in lots of beautiful video footage. Loving it. And the best part is that the first surface evolution step is done through an effective implementation of the hyperbolic mean curvature flow, which means that the entirety of the process is typically 3 to 20 times faster than the state of the art, while being more robust in handling splitting and merging scenarios. The computation times are now in the order of milliseconds instead of seconds. The earlier work in this comparison was also showcased in Two Minute Papers; if I see it correctly, it was in episode number 18. Holy mother of papers, how far we have come since. I've put a link to it in the video description. The paper is beautifully written and there are plenty of goodies therein; for instance, an issue with non-manifold junctions is addressed, so make sure to have a look. The source code of this project is also available. Thanks for watching and for your generous support, and I'll see you next time.
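A 2D toy analogue of the two-step idea (a curvature-driven flow that shrinks the film, followed by a correction that restores the enclosed volume) can be sketched on a closed curve, standing in for a bubble cross-section. The real method evolves non-manifold triangle meshes in 3D; the discrete Laplacian used as the curvature term and the uniform offset along the normals used for the area correction below are simplifications chosen only to keep the example self-contained.

```python
import numpy as np

n = 200
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pts = np.stack([1.3 * np.cos(t), 0.8 * np.sin(t)], axis=1)   # an ellipse-shaped "bubble"
vel = np.zeros_like(pts)                                      # hyperbolic flow carries a velocity

def enclosed_area(p):
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def outward_normals(p):
    tang = np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)
    nrm = np.stack([tang[:, 1], -tang[:, 0]], axis=1)         # rotate the tangent by -90 degrees
    return nrm / np.linalg.norm(nrm, axis=1, keepdims=True)

target_area = enclosed_area(pts)   # the "volume" we want to preserve
dt = 0.05

for step in range(200):
    # Step 1: curvature flow, second order in time -- curvature acts as an acceleration
    # (the discrete Laplacian approximates the curvature normal direction).
    curvature_vec = np.roll(pts, -1, axis=0) + np.roll(pts, 1, axis=0) - 2.0 * pts
    vel += dt * curvature_vec
    pts += dt * vel
    # Step 2: volume (area) correction -- a uniform offset along the outward normals,
    # sized so that the enclosed area returns to its target value.
    deficit = target_area - enclosed_area(pts)
    perimeter = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum()
    pts += (deficit / perimeter) * outward_normals(pts)

# After the loop, the curve has relaxed under the flow while its enclosed area stays near target_area.
```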
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 9.68, "text": " This paper is about simulating on a computer what happens when bubbles collide."}, {"start": 9.68, "end": 12.24, "text": " Prepare for lots of beautiful footage."}, {"start": 12.24, "end": 17.12, "text": " This is typically done by simulating the Navier Stokes equations that describe the evolution"}, {"start": 17.12, "end": 20.52, "text": " of the velocity within a piece of fluid over time."}, {"start": 20.52, "end": 25.240000000000002, "text": " However, because the world around us is a continuum, we cannot compute these quantities"}, {"start": 25.240000000000002, "end": 27.2, "text": " in an infinite number of points."}, {"start": 27.2, "end": 33.64, "text": " So, we have to subdivide the 3D space into a grid and compute them only in these grid points."}, {"start": 33.64, "end": 37.48, "text": " The finer the grid, the more details appear in our simulations."}, {"start": 37.48, "end": 42.4, "text": " If we try to simulate what happens when these bubbles collide, we would need to create a grid"}, {"start": 42.4, "end": 44.4, "text": " that can capture these details."}, {"start": 44.4, "end": 50.879999999999995, "text": " This is an issue because the thickness of a bubble film is in the order of 10 to 800 nanometers"}, {"start": 50.879999999999995, "end": 54.84, "text": " and this would require a hopelessly fine high resolution grid."}, {"start": 54.84, "end": 59.720000000000006, "text": " By the way, measuring the thickness of bubbles is a science of its own, there is a fantastic"}, {"start": 59.720000000000006, "end": 63.92, "text": " reddit discussion on it, I put a link to it in the video description, make sure to check"}, {"start": 63.92, "end": 64.92, "text": " it out."}, {"start": 64.92, "end": 69.4, "text": " So, these overly fine grids take too long to compute, so what do we do?"}, {"start": 69.4, "end": 75.16, "text": " Well, first we need to focus on how to directly compute how the shape of soap bubbles evolves"}, {"start": 75.16, "end": 76.16, "text": " over time."}, {"start": 76.16, "end": 81.08000000000001, "text": " Fortunately, from Belgian physicist Joseph Plato, we know that they seek to reduce their"}, {"start": 81.08000000000001, "end": 84.76, "text": " surface area, but retain their volume over time."}, {"start": 84.76, "end": 87.48, "text": " One of the many beautiful phenomena in nature."}, {"start": 87.48, "end": 89.72, "text": " So, this shall be the first step."}, {"start": 89.72, "end": 94.80000000000001, "text": " We simulate forces that create the appropriate shape changes and proceed into an intermediate"}, {"start": 94.80000000000001, "end": 95.80000000000001, "text": " state."}, {"start": 95.80000000000001, "end": 99.64, "text": " However, by pushing the film inwards, its volume has decreased."}, {"start": 99.64, "end": 103.72, "text": " Therefore, this intermediate state is not how it should look in nature."}, {"start": 103.72, "end": 107.76, "text": " This is to be remedied now where we apply a volume correction step."}, {"start": 107.76, "end": 113.12, "text": " In the validation section, it is shown that the results follow Plato's laws quite closely."}, {"start": 113.12, "end": 118.04, "text": " Also, you know well that my favorite kind of validation is when we let reality be our"}, {"start": 118.04, "end": 123.52000000000001, "text": " judge, and in this work, the results have been compared to a real 
life experimental setup"}, {"start": 123.52000000000001, "end": 125.96000000000001, "text": " and proved to be very close to it."}, {"start": 125.96000000000001, "end": 127.84, "text": " Take a little time to absorb this."}, {"start": 127.84, "end": 133.8, "text": " We can write a computer program that reproduces what would happen in reality and result in"}, {"start": 133.8, "end": 136.24, "text": " lots of beautiful video footage."}, {"start": 136.24, "end": 137.44, "text": " Loving it."}, {"start": 137.44, "end": 142.04000000000002, "text": " And the best part is that the first surface evolution step is done through an effective"}, {"start": 142.04, "end": 147.4, "text": " implementation of the hyperbolic mean curvature flow, which means that the entirety of the"}, {"start": 147.4, "end": 154.16, "text": " process is typically 3 to 20 times faster than the state of the art while being more robust"}, {"start": 154.16, "end": 156.95999999999998, "text": " in handling splitting and merging scenarios."}, {"start": 156.95999999999998, "end": 162.35999999999999, "text": " The computation times are now in the order of milliseconds instead of seconds."}, {"start": 162.35999999999999, "end": 167.07999999999998, "text": " The earlier work in this comparison was also showcased in two minute papers if I see"}, {"start": 167.07999999999998, "end": 171.04, "text": " it correctly, it was in episode number 18."}, {"start": 171.04, "end": 174.07999999999998, "text": " Holy matter of papers, how far we have come since."}, {"start": 174.07999999999998, "end": 176.4, "text": " I've put a link to it in the video description."}, {"start": 176.4, "end": 181.16, "text": " The paper is beautifully written and there are plenty of goodies there in, for instance,"}, {"start": 181.16, "end": 185.35999999999999, "text": " an issue with non-manifold junctions is addressed, so make sure to have a look."}, {"start": 185.35999999999999, "end": 188.0, "text": " The source code of this project is also available."}, {"start": 188.0, "end": 207.92000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HANeLG0l2GA
This AI Sings | Two Minute Papers #230
The paper "A Neural Parametric Singing Synthesizer" is available here: http://www.dtic.upf.edu/~mblaauw/NPSS/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Check out my new voice here! https://goo.gl/z6zxuT Jean-Michel Jarre Vocoder song: https://open.spotify.com/album/0ZKglE5xlIqsWmtQHn9WxZ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Vocoder video credit: Shal Music/FX - https://www.youtube.com/watch?v=uXn1up-9D78 Thumbnail background image credit: https://flic.kr/p/GAhDRa Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about building an AI vocoder that is able to synthesize believable singing from MIDI and lyrics as inputs. But first, what is a vocoder? It works kind of like this. A vocoder is an audio processor that is almost unmistakable in its character. It is used to combine the tonal qualities of one sound source, called a carrier, with the frequency-specific movements of a second signal, called a modulator. This allows you to create a computer voice. Fellow Scholars who are fans of Jean-Michel Jarre's music are likely very familiar with this effect. I've put a link to an example song in the video description. Make sure to leave a comment with your favorite songs with vocoders, so I and other Fellow Scholars can also nerd out on them. And now, about the MIDI and lyrics terms. The lyrics part is a simple text file containing the words that this synthesized voice should sing, and the MIDI is data that describes the pitch, length, and velocity of each sound. With a little simplification, we could say that the score is given as an input and the algorithm has to output the corresponding singing. We will talk about the algorithm in a moment, but for now, let's listen to it. Wow! So this is a vocoder. This means that it separates the pitch and timbre components of the voice; therefore, the waveforms are not generated directly, which is a key difference from Google DeepMind's WaveNet. This leads to two big advantages. One, the generation times are quite favorable, and by favorable, I guess you're hoping for real time. Well, hold on to your papers, because it is not just real time, it is 10 to 15 times faster than real time. And two, this way, the algorithm only needs a modest amount of training data to function well. Here you can see the input phonemes that make up the syllables of the lyrics, each typically corresponding to one note. This is then connected to a modified WaveNet architecture that uses 2x1 dilated convolutions. This means that the dilation factor is doubled in each layer, thereby introducing an exponential growth in the receptive field of the model. This helps keep the parameter count down, which enables training on small datasets. As validation, mean opinion scores have been recorded; in a previous episode, we discussed that this is a number that describes how well a sound sample would pass as genuine human speech or singing. The tests showed that this new method is well ahead of the competition, landing approximately midway between the previous works and the reference singing recordings. There are plenty of other tests in the paper; this is just one of many, so make sure to have a look. This is one important stepping stone towards synthesizing singing that is highly usable in digital media and where generation is faster than real time. Creating a MIDI input is a piece of cake with a MIDI master keyboard, or we can even draw the notes by hand in many digital audio workstation programs. After that, writing the lyrics is as simple as it gets and doesn't need any additional software. Tools like this are going to make this process accessible to everyone. Loving it. If you would like to help us create more elaborate videos, please consider supporting us on Patreon. We also support one-time payments through cryptos like Bitcoin, Ethereum and Litecoin. Everything is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
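To make the dilation-doubling idea concrete, here is a minimal sketch of a WaveNet-style stack of 2x1 dilated convolutions in PyTorch. This is not the authors' parametric singing synthesizer, only an illustration of how doubling the dilation per layer yields an exponentially growing receptive field; the channel and layer counts are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedStack(nn.Module):
    """Minimal WaveNet-style stack: kernel size 2, dilation doubled per layer."""
    def __init__(self, channels=32, layers=8):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
            for i in range(layers)
        )

    def forward(self, x):
        for conv in self.convs:
            d = conv.dilation[0]
            # Left-pad by the dilation so the convolution stays causal and the
            # sequence length is preserved.
            x = torch.relu(conv(F.pad(x, (d, 0))))
        return x

layers = 8
# Receptive field grows exponentially: 1 + (1 + 2 + 4 + ... + 2**(layers-1)) = 2**layers.
print("receptive field in samples:", 1 + sum(2 ** i for i in range(layers)))  # 256

net = DilatedStack(channels=32, layers=layers)
features = torch.randn(1, 32, 1000)      # (batch, channels, time steps)
print(net(features).shape)               # torch.Size([1, 32, 1000])
```

With eight layers of kernel-size-2 convolutions and dilations 1, 2, 4, ..., 128, a single output sample depends on 256 input samples, which is what keeps the parameter count modest for small training sets.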
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jolnai-Fehir."}, {"start": 4.32, "end": 9.76, "text": " This work is about building an AI vocoder that is able to synthesize believable singing"}, {"start": 9.76, "end": 12.36, "text": " from MIDI and lyrics as inputs."}, {"start": 12.36, "end": 14.8, "text": " But first, what is a vocoder?"}, {"start": 14.8, "end": 16.8, "text": " It works kind of like this."}, {"start": 16.8, "end": 21.76, "text": " A vocoder is an audio processor that is almost unmistakable in its character."}, {"start": 21.76, "end": 26.36, "text": " It is used to combine the tonal qualities of one sound source, call a carrier,"}, {"start": 26.36, "end": 30.64, "text": " with the frequency-specific movements of a second signal called a modulator."}, {"start": 30.64, "end": 33.56, "text": " This allows you to create a computer voice."}, {"start": 33.56, "end": 39.12, "text": " Fellow Scholars, who are fans of Jean-Michel-Jarge music, are likely very familiar with this effect."}, {"start": 39.12, "end": 42.08, "text": " I've put a link to an example song in the video description."}, {"start": 42.08, "end": 45.44, "text": " Make sure to leave a comment with your favorite songs with vocoders,"}, {"start": 45.44, "end": 49.16, "text": " so I and other fellow Scholars can also nerd out on them."}, {"start": 49.16, "end": 52.0, "text": " And now, about the MIDI and lyrics terms."}, {"start": 52.0, "end": 57.84, "text": " The lyrics part is a simple text file containing the words that this synthesized voice should sing,"}, {"start": 57.84, "end": 63.8, "text": " and the MIDI is data that describes the pitch, length, and velocity of each sound."}, {"start": 63.8, "end": 68.44, "text": " With a little simplification, we could say that the score is given as an input"}, {"start": 68.44, "end": 71.48, "text": " and the algorithm has to output the singing footage."}, {"start": 71.48, "end": 84.68, "text": " We will talk about the algorithm in a moment, but for now, let's listen to it."}, {"start": 101.68, "end": 106.68, "text": " Wow!"}, {"start": 106.68, "end": 108.68, "text": " So this is a vocoder."}, {"start": 108.68, "end": 112.88000000000001, "text": " This means that it separates the pitch and thumb-bar components of the voice,"}, {"start": 112.88000000000001, "end": 119.08000000000001, "text": " therefore, the waveforms are not generated directly, which is a key difference from Google DeepMind's wave net."}, {"start": 119.08000000000001, "end": 121.68, "text": " This leads to two big advantages."}, {"start": 121.68, "end": 128.08, "text": " One, the generation times are quite favorable, and by favorable, I guess you're hoping for real time."}, {"start": 128.08, "end": 135.28, "text": " Well, hold on to your papers because it is not real time, it is 10 to 15 times real time."}, {"start": 135.28, "end": 141.28, "text": " And two, this way, the algorithm will only need a modest amount of training data to function well."}, {"start": 141.28, "end": 145.28, "text": " Here you can see the input phonemes that make up the syllables of the lyrics,"}, {"start": 145.28, "end": 148.08, "text": " each typically corresponding to one note."}, {"start": 148.08, "end": 154.88000000000002, "text": " This is then connected to a modified wave net architecture that uses two by one dilated convolutions."}, {"start": 154.88, "end": 163.07999999999998, "text": " This means that the dilation factor is doubled in each layer, thereby introducing an 
exponential growth in the receptive field of the model."}, {"start": 163.07999999999998, "end": 168.28, "text": " This helps us keep the parameter count down, which enables training on small datasets."}, {"start": 168.28, "end": 172.88, "text": " As validation, the mean opinion scores have been recorded in a previous episode,"}, {"start": 172.88, "end": 180.48, "text": " we discussed that this is a number that describes how a sound sample would pass as genuine human speech or singing."}, {"start": 180.48, "end": 185.28, "text": " The test showed that this new method is well ahead of the competition, approximately,"}, {"start": 185.28, "end": 189.67999999999998, "text": " midway between the previous works and the reference singing footage."}, {"start": 189.67999999999998, "end": 191.88, "text": " There are plenty of other tests in the paper."}, {"start": 191.88, "end": 194.67999999999998, "text": " This is just one of many, so make sure to have a look."}, {"start": 194.67999999999998, "end": 201.28, "text": " This is one important stepping stone towards synthesizing singing that is highly usable in digital media"}, {"start": 201.28, "end": 203.88, "text": " and where generation is faster than real time."}, {"start": 203.88, "end": 207.67999999999998, "text": " Creating a MIDI input is a piece of cake with a MIDI master keyboard"}, {"start": 207.68, "end": 213.08, "text": " or we can even draw the notes by hand in many digital audio workstation programs."}, {"start": 213.08, "end": 218.68, "text": " After that, writing the lyrics is as simple as it gets and doesn't need any additional software."}, {"start": 218.68, "end": 223.68, "text": " Tools like this are going to make this process accessible to everyone, loving it."}, {"start": 223.68, "end": 228.88, "text": " If you would like to help us create more elaborate videos, please consider supporting us on Patreon."}, {"start": 228.88, "end": 234.68, "text": " We also support one-time payments through cryptos like Bitcoin, Ethereum and Litecoin."}, {"start": 234.68, "end": 236.88, "text": " Everything is available in the video description."}, {"start": 236.88, "end": 241.07999999999998, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=3yOZxmlBG3Y
Pruning Makes Faster and Smaller Neural Networks | Two Minute Papers #229
The paper "Learning to Prune Filters in Convolutional Neural Networks" is available here: https://arxiv.org/pdf/1801.07365.pdf We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-3064187/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When we are talking about deep learning, we are talking about neural networks that have tens, sometimes hundreds of layers, and hundreds of neurons within these layers. This is an enormous number of parameters to train, and clearly there should be some redundancy, some duplication, in the information within. This paper tries to throw out many of these neurons of the network without affecting its accuracy too much. This process we shall call pruning, and it helps create neural networks that are faster and smaller. The accuracy term I used typically means a score on a classification task, in other words, how good this learning algorithm is at telling what an image or video depicts. This particular technique is specialized for pruning convolutional neural networks, where the neurons are endowed with a small receptive field and are better suited for images. These neurons are also commonly referred to as filters, so here we have to provide a good mathematical definition of a proper pruning. The authors propose a definition where we can specify a maximum accuracy drop that we deem acceptable, which will be denoted with the letter b in a moment, and the goal is to prune as many filters as we can without going over the specified accuracy loss budget. The pruning process is controlled by an accuracy and an efficiency term, and the goal is to have some sort of balance between the two. To get a more visual understanding of what is happening, here the filters you see outlined with a red border are kept by the algorithm, and the rest are discarded. As you can see, the algorithm is not as trivial as many previous approaches that just prune away filters with weaker responses. Here you see the table with the b numbers. Several tests reveal that around a quarter of the filters can be pruned with an accuracy loss of 0.3%, and with a higher b, we can prune more than 75% of the filters with a loss of around 3%. This is incredible. Image segmentation tasks are about finding the regions that different objects inhabit. Interestingly, when trying the pruning for this task, it not only introduces a minimal loss of accuracy, but in some cases the pruned version of the neural network performs even better. How cool is that? And of course, the best part is that we can choose a trade-off that is appropriate for our application. For instance, if we are looking for a light cleanup, we can use the first option at a minimal penalty, or if we wish to have a tiny, tiny neural network that can run on a mobile device, we can look for the more heavy-handed approach by sacrificing just a tiny bit more accuracy. And we have everything in between. There is plenty more validation for the method in the paper, so make sure to have a look. It is really great to see that new research works make neural networks not only more powerful over time, but there are also efforts to make them smaller and more efficient at the same time. Great news indeed. Thanks for watching and for your generous support, and I'll see you next time.
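The budgeted-pruning framing above can be sketched as a simple greedy loop. Note that this is not the paper's actual learning-based pruning agent, only an illustration of the accuracy-budget idea; `evaluate`, `candidate_filters`, and `remove_filter` are hypothetical helpers standing in for a real pruning framework:

```python
def prune_with_budget(model, evaluate, candidate_filters, remove_filter, b=0.03):
    """Greedily remove filters while staying within an accuracy loss budget b."""
    baseline = evaluate(model)                 # e.g. top-1 accuracy on a validation set
    pruned = []
    for f in candidate_filters:                # e.g. ordered by estimated importance
        trial = remove_filter(model, f)        # a copy of the model without filter f
        if baseline - evaluate(trial) <= b:    # still within the allowed accuracy drop
            model = trial
            pruned.append(f)
    return model, pruned
```

The trade-off discussed in the transcript corresponds to the choice of `b`: a small budget gives a light cleanup, a larger one prunes aggressively for mobile-sized networks.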
[{"start": 0.0, "end": 4.7, "text": " Dear Fellow Scholars, this is two minute papers with Kato Zsolnai-Fehir."}, {"start": 4.7, "end": 8.88, "text": " When we are talking about deep learning, we are talking about neural networks that have"}, {"start": 8.88, "end": 14.120000000000001, "text": " tens, sometimes hundreds of layers and hundreds of neurons within these layers."}, {"start": 14.120000000000001, "end": 19.36, "text": " This is an enormous number of parameters to train, and clearly there should be some redundancy"}, {"start": 19.36, "end": 21.900000000000002, "text": " some duplication in the information within."}, {"start": 21.900000000000002, "end": 26.560000000000002, "text": " This paper is trying to throw out many of these neurons of the network without affecting"}, {"start": 26.560000000000002, "end": 28.46, "text": " its accuracy too much."}, {"start": 28.46, "end": 33.660000000000004, "text": " This process we shall call pruning, and it helps creating neural networks that are faster"}, {"start": 33.660000000000004, "end": 34.660000000000004, "text": " and smaller."}, {"start": 34.660000000000004, "end": 40.260000000000005, "text": " The accuracy term I used typically means a score on a classification task, in other words,"}, {"start": 40.260000000000005, "end": 45.66, "text": " how good this learning algorithm is in telling what an image or video depicts."}, {"start": 45.66, "end": 50.260000000000005, "text": " This particular technique is specialized for pruning convolutional neural networks, where"}, {"start": 50.260000000000005, "end": 56.1, "text": " the neurons are in doubt with a small receptive field and are better suited for images."}, {"start": 56.1, "end": 61.160000000000004, "text": " These neurons are also commonly referred to as filters, so here we have to provide a"}, {"start": 61.160000000000004, "end": 64.72, "text": " good mathematical definition of a proper pruning."}, {"start": 64.72, "end": 70.2, "text": " The authors proposed a definition where we can specify a maximum accuracy drop that we"}, {"start": 70.2, "end": 75.44, "text": " deemed to be acceptable, which will be denoted with the letter B in a moment, and the goal"}, {"start": 75.44, "end": 81.3, "text": " is to prune as many filters as we can without going over the specified accuracy loss budget."}, {"start": 81.3, "end": 86.64, "text": " The pruning process is controlled by an accuracy and efficiency term, and the goal is to have"}, {"start": 86.64, "end": 88.97999999999999, "text": " some sort of balance between the two."}, {"start": 88.97999999999999, "end": 94.22, "text": " To get a more visual understanding of what is happening, here the filters you see outlined"}, {"start": 94.22, "end": 99.25999999999999, "text": " with the red border are kept by the algorithm, and the rest are discarded."}, {"start": 99.25999999999999, "end": 104.34, "text": " As you can see, the algorithm is not as trivial as many previous approaches that just prune"}, {"start": 104.34, "end": 106.86, "text": " away filters with weaker responses."}, {"start": 106.86, "end": 109.9, "text": " Here you see the table with the B numbers."}, {"start": 109.9, "end": 115.10000000000001, "text": " Several tests reveal that around a quarter of the filters can be pruned with an accuracy"}, {"start": 115.10000000000001, "end": 123.34, "text": " loss of 0.3% and with a higher B we can prune more than 75% of the filters with a loss"}, {"start": 123.34, "end": 125.06, "text": " of around 3%."}, {"start": 125.06, "end": 127.06, 
"text": " This is incredible."}, {"start": 127.06, "end": 132.22, "text": " Image segmentation tasks are about finding the regions that different objects inhabit."}, {"start": 132.22, "end": 136.86, "text": " Interestingly, when trying the pruning for this task, it not only introduces a minimal"}, {"start": 136.86, "end": 142.22000000000003, "text": " loss of accuracy, in some cases the pruned version of the neural network performs even"}, {"start": 142.22000000000003, "end": 143.38000000000002, "text": " better."}, {"start": 143.38000000000002, "end": 144.94000000000003, "text": " How cool is that?"}, {"start": 144.94000000000003, "end": 148.9, "text": " And of course, the best part is that we can choose a trade-off that is appropriate for"}, {"start": 148.9, "end": 150.18, "text": " our application."}, {"start": 150.18, "end": 155.10000000000002, "text": " For instance, if we are looking for a light cleanup, we can use the first option at a minimal"}, {"start": 155.10000000000002, "end": 161.82000000000002, "text": " penalty, or if we wish to have a tiny, tiny neural network that can run on a mobile device,"}, {"start": 161.82, "end": 167.54, "text": " we can look for the more heavy-handed approach by sacrificing just a tiny bit more accuracy."}, {"start": 167.54, "end": 169.54, "text": " And we have everything in between."}, {"start": 169.54, "end": 173.73999999999998, "text": " There is plenty more validation for the method in the paper, make sure to have a look."}, {"start": 173.73999999999998, "end": 178.82, "text": " It is really great to see that new research works make neural networks not only more powerful"}, {"start": 178.82, "end": 183.82, "text": " over time, but there are efforts in making them smaller and more efficient at the same"}, {"start": 183.82, "end": 184.82, "text": " time."}, {"start": 184.82, "end": 185.82, "text": " Great news indeed."}, {"start": 185.82, "end": 192.82, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bdM9c2OFYuw
Google's Text Reader AI: Almost Perfect | Two Minute Papers #228
The paper "Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions" is available here: https://google.github.io/tacotron/publications/tacotron2/index.html https://arxiv.org/abs/1712.05884 Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Unofficial implementations - proceed with care: https://github.com/candlewill/Tacotron-2 https://github.com/r9y9/wavenet_vocoder We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2875123/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Earlier, we talked about Google's WaveNet, a learning-based text-to-speech engine. This means that we give it a piece of written text, and after a training step using someone's voice, it has to read it aloud using this person's voice as convincingly as possible. And this follow-up work is about making it even more convincing. Before we go into it, let's marvel at these new results together. Generative adversarial network or variational autoencoder. He has read the whole thing. He reads books. This is really awesome. This is your personal assistant, Google Home. This is your personal assistant, Google Home. The buses aren't the problem, they actually provide a solution. The buses aren't the problem, they actually provide a solution. She sells seashells on the seashore. The shells she sells are seashells, I'm sure. As you can hear, it is great at prosody, stress, and intonation, which leads to really believable human speech. The magic component in the original WaveNet paper was introducing dilated convolutions for this problem. This makes large skips in the input data, so we have a better global view of it. It is a bit like increasing the receptive field of the eye so we can see the entire landscape and not only a tree in a photograph. The magic component in this new work is using mel spectrograms as an input to WaveNet. This is an intermediate representation based on human perception that records not only how different words should be pronounced, but the expected volumes and intonations as well. The new model was trained on about 24 hours of speech data. And of course, no research work should come without some sort of validation. The first is recording the mean opinion scores for previous algorithms, this one, and real professional voice recordings. The mean opinion score is a number that describes how well a sound sample would pass as genuine human speech. The new algorithm passed with flying colors. An even more practical evaluation was also done in the form of a user study where people were listening to the synthesized samples and professional voice narrators and had to guess which one is which. And this is truly incredible, because most of the time people had no idea which was which. If you don't believe it, we'll try this ourselves in a moment. A very small but statistically significant tendency towards favoring the real footage was recorded, likely because some words, like Merlot, are mispronounced. AI-voiced audiobooks, automatic voice narration for video games. Bring it on! What a time to be alive! Note that producing these waveforms is not real time and still takes quite a while. To progress along that direction, scientists at DeepMind wrote a heck of a paper where they sped WaveNet up a thousand times. Leave a comment if you would like to hear more about it in a future episode. And of course, new inventions like this will also raise new challenges down the line. It may be that voice recordings will become much easier to forge and be less useful as evidence unless we find new measures to verify their authenticity, for instance to sign them like we do with software. In closing, a few audio sample pairs. One of them is real, one of them is synthesized. What do you think? Which is which? Leave a comment below. That girl did a video about Star Wars lipstick. That girl did a video about Star Wars lipstick. She earned a doctorate in sociology at Columbia University. She earned a doctorate in sociology at Columbia University. George Washington was the first president of the United States. George Washington was the first president of the United States. I'm too busy for romance. I'm too busy for romance. I'll just leave a quick hint here that I found on the webpage. Yup, there you go. If you have enjoyed this episode, please make sure to support us on Patreon. This is how we can keep the show running, and you know the drill. One dollar is almost nothing, but it keeps the papers coming. Thanks for watching and for your generous support, and I'll see you next time.
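As a rough illustration of the mel-spectrogram intermediate representation mentioned above, here is a short sketch using the librosa library. The file name and all analysis parameters (window size, hop length, number of mel bands) are illustrative choices, not the exact settings used by Tacotron 2:

```python
import numpy as np
import librosa

# "speech_sample.wav" is a hypothetical input file.
y, sr = librosa.load("speech_sample.wav", sr=22050)
mel = librosa.feature.melspectrogram(
    y=y, sr=sr,
    n_fft=1024,        # ~46 ms analysis window at 22.05 kHz
    hop_length=256,    # ~12 ms hop between frames
    n_mels=80,         # 80 mel bands, a common choice for TTS front ends
)
log_mel = np.log(np.clip(mel, 1e-5, None))   # dynamic range compression
print(log_mel.shape)                         # (80, number_of_frames)
```

A sequence-to-sequence network predicts frames like these from text, and a WaveNet-style vocoder then turns the frames into a waveform, which is the division of labor the transcript describes.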
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.36, "end": 9.4, "text": " Earlier, we talked about Google's WaveNet, a learning-based text-to-speech engine."}, {"start": 9.4, "end": 14.48, "text": " This means that we give it a piece of written text, and after a training step using someone's"}, {"start": 14.48, "end": 20.080000000000002, "text": " voice, it has to read it aloud using this person's voice as convincingly as possible."}, {"start": 20.080000000000002, "end": 23.64, "text": " And this follow-up work is about making it even more convincing."}, {"start": 23.64, "end": 27.080000000000002, "text": " Before we go into it, let's marvel at these new results together."}, {"start": 27.08, "end": 32.08, "text": " Generative adversarial network or variational autoencoder."}, {"start": 32.08, "end": 34.08, "text": " He has read the whole thing."}, {"start": 34.08, "end": 37.08, "text": " He reads books."}, {"start": 37.08, "end": 40.08, "text": " This is really awesome."}, {"start": 40.08, "end": 45.08, "text": " This is your personal assistant, Google Home."}, {"start": 45.08, "end": 48.08, "text": " This is your personal assistant, Google Home."}, {"start": 48.08, "end": 52.08, "text": " The buses aren't the problem, they actually provide a solution."}, {"start": 52.08, "end": 56.08, "text": " The buses aren't the problem, they actually provide a solution."}, {"start": 56.08, "end": 61.08, "text": " She sells seashells on the seashore."}, {"start": 61.08, "end": 64.08, "text": " The shells she sells are seashells, I'm sure."}, {"start": 64.08, "end": 71.08, "text": " As you can hear, it is great at prosody, stress, and intonation, which leads to really"}, {"start": 71.08, "end": 72.75999999999999, "text": " believable human speech."}, {"start": 72.75999999999999, "end": 77.75999999999999, "text": " The magic component in the original WaveNet paper was introducing dilated convolutions"}, {"start": 77.75999999999999, "end": 79.0, "text": " for this problem."}, {"start": 79.0, "end": 83.72, "text": " This makes large skips in the input data, so we have a better global view of it."}, {"start": 83.72, "end": 88.88, "text": " It is a bit like increasing the receptive field of the eye so we can see the entire landscape"}, {"start": 88.88, "end": 91.2, "text": " and not only a tree on a photograph."}, {"start": 91.2, "end": 97.12, "text": " The magic component in this new work is using mouse spectrograms as an input to WaveNet."}, {"start": 97.12, "end": 102.44, "text": " This is an intermediate representation that is based on the human perception that records"}, {"start": 102.44, "end": 107.92, "text": " not only how different words should be pronounced, but the expected volumes and intonations as"}, {"start": 107.92, "end": 108.92, "text": " well."}, {"start": 108.92, "end": 112.56, "text": " The new model was trained on about 24 hours of speech data."}, {"start": 112.56, "end": 116.88, "text": " And of course, no research work should come without some sort of validation."}, {"start": 116.88, "end": 122.76, "text": " The first is recording the mean opinion scores for previous algorithms, this one, and"}, {"start": 122.76, "end": 125.16, "text": " real professional voice recordings."}, {"start": 125.16, "end": 130.76, "text": " The mean opinion score is a number that describes how a sound sample would pass as genuine"}, {"start": 130.76, "end": 131.96, "text": " human speech."}, {"start": 131.96, "end": 
134.92000000000002, "text": " The new algorithm passed with flying colors."}, {"start": 134.92000000000002, "end": 140.12, "text": " And even more practical evaluation was also done in the form of a user study where people"}, {"start": 140.12, "end": 144.56, "text": " were listening to the synthesized samples and professional voice narrators and had to"}, {"start": 144.56, "end": 146.64000000000001, "text": " guess which one is which."}, {"start": 146.64000000000001, "end": 152.12, "text": " And this is truly incredible because most of the time people had no idea which was which."}, {"start": 152.12, "end": 155.36, "text": " If you don't believe it, we'll try this ourselves in a moment."}, {"start": 155.36, "end": 160.16, "text": " A very small but statistically significant tendency towards favoring the real footage"}, {"start": 160.16, "end": 165.8, "text": " was recorded likely because some words like Merlot are mispronounced."}, {"start": 165.8, "end": 170.28, "text": " Only voiced audiobooks, automatic voice narration for video games."}, {"start": 170.28, "end": 171.28, "text": " Bring it on!"}, {"start": 171.28, "end": 173.0, "text": " What a time to be alive!"}, {"start": 173.0, "end": 177.84, "text": " Note that producing these waveforms is not real time and still takes quite a while."}, {"start": 177.84, "end": 183.12, "text": " To progress along that direction, scientists at DeepMind wrote a hack of a paper where"}, {"start": 183.12, "end": 186.04000000000002, "text": " they spat a wave nut up a thousand times."}, {"start": 186.04000000000002, "end": 189.60000000000002, "text": " Leave a comment if you would like to hear more about it in a future episode."}, {"start": 189.60000000000002, "end": 194.20000000000002, "text": " And of course, new inventions like this will also raise new challenges down the line."}, {"start": 194.2, "end": 199.48, "text": " It may be that voice recordings will become much easier to forge and be less useful as"}, {"start": 199.48, "end": 205.07999999999998, "text": " evidence unless we find new measures to verify their authenticity, for instance to sign them"}, {"start": 205.07999999999998, "end": 206.92, "text": " like we do with software."}, {"start": 206.92, "end": 209.64, "text": " In closing, a few audio sample pairs."}, {"start": 209.64, "end": 212.67999999999998, "text": " One of them is real, one of them is synthesized."}, {"start": 212.67999999999998, "end": 213.67999999999998, "text": " What do you think?"}, {"start": 213.67999999999998, "end": 214.67999999999998, "text": " Which is which?"}, {"start": 214.67999999999998, "end": 216.88, "text": " Leave a comment below."}, {"start": 216.88, "end": 220.92, "text": " That girl did a video about Star Wars lipstick."}, {"start": 220.92, "end": 225.0, "text": " That girl did a video about Star Wars lipstick."}, {"start": 225.0, "end": 229.32, "text": " She earned a doctorate in sociology at Columbia University."}, {"start": 229.32, "end": 233.88, "text": " She earned a doctorate in sociology at Columbia University."}, {"start": 233.88, "end": 237.88, "text": " George Washington was the first president of the United States."}, {"start": 237.88, "end": 242.16, "text": " George Washington was the first president of the United States."}, {"start": 242.16, "end": 244.79999999999998, "text": " I'm too busy for romance."}, {"start": 244.79999999999998, "end": 248.39999999999998, "text": " I'm too busy for romance."}, {"start": 248.4, "end": 252.16, "text": " I'll just leave a quick hint here that I found 
on the webpage."}, {"start": 252.16, "end": 253.8, "text": " Up, there you go."}, {"start": 253.8, "end": 257.56, "text": " If you have enjoyed this episode, please make sure to support us on Patreon."}, {"start": 257.56, "end": 260.72, "text": " This is how we can keep the show running, and you know the drill."}, {"start": 260.72, "end": 264.12, "text": " One dollar is almost nothing, but it keeps the papers coming."}, {"start": 264.12, "end": 284.08, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pAiiPNg0kDE
SLAC Dataset From MIT and Facebook | Two Minute Papers #227
The paper "SLAC: A Sparsely Labeled Dataset for Action Classification and Localization" is available here: http://slac.csail.mit.edu/ https://arxiv.org/abs/1712.09374 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-3011677/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This project is about a dataset created through a joint effort between MIT and Facebook. As it turns out, this dataset is way more useful than I initially thought; I'll tell you why in a moment. Datasets are used to train and test the quality of learning algorithms. This particular dataset contains short video clips. These clips are passed to a neural network, which is asked to classify the kind of activity that is taking place in the video. In this dataset, there are many cases where everything is given to come to a logical answer that is wrong. We may be in a room with a climbing wall, but exercising is not necessarily happening. We could be around a swimming pool, but swimming is not necessarily happening. I'm pretty sure this has happened to you too. This is a brilliant idea, because it is super easy for a neural network to assume that if there is a swimming pool, swimming is probably happening, but it takes a great deal of understanding to actually know what constitutes the swimming part. A few episodes ago, we discussed that this could potentially be a stepping stone towards creating machines that think like humans. Without looking into it, it would be easy to think that creating a dataset is basically throwing a bunch of training samples together and calling it a day. I can assure you that this is not the case, and that creating a dataset like this was a Herculean effort, as it contains more than half a million videos and almost 2 million annotations for 200 different activities. And there are plenty of pre-processing steps that one has to perform to make it usable. The collection procedure contains a video crawling step where a large number of videos are obtained from YouTube, which are then de-duplicated, meaning that videos that are too similar to one already contained in the database are removed. A classical case is many different kinds of commentary on the same footage. This amounted to the removal of more than 150,000 videos. Then, all of these videos undergo a shot and person detection step where relevant subclips are extracted that contain some kind of human activity. These are then looked at by two different classifiers, and depending on whether there was a consensus between the two, a decision is made on whether the clip is to be discarded or not. This step helps balance the ratio of videos where there is some sort of relevant action compared to the clips where there is no relevant action happening. This also makes the negative samples much harder, because the context may be correct but the expected activity may not be there. This is the classical hard case with the swimming pool and people in swimming suits twiddling their thumbs instead of swimming. And here comes the more interesting part. When training a neural network for other, loosely related tasks, using this dataset for pre-training improves the scores significantly. I'll try to give a little context for the numbers, because these numbers are absolutely incredible. There are cases where the success rate is improved by over 30%, which speaks for itself. However, there are other cases where the difference is about 10 to 15%; that is also remarkable when we are talking about high numbers, because the closer the classifier gets to 100%, the more difficult the remaining corner cases are that improve the accuracy. In these cases, even a 3% improvement is remarkable. And before we go, greetings and best regards to Lucas, the little scholar who seems to be absorbing the papers along with his mother's milk. Excellent, one cannot start early enough in the pursuit of knowledge. Thanks for watching and for your generous support, and I'll see you next time.
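The consensus step in the collection pipeline described above can be sketched roughly as follows. This is only a toy illustration of the idea, not the SLAC authors' pipeline; `clf_a`, `clf_b`, and the clip objects are hypothetical stand-ins:

```python
def consensus_filter(clips, clf_a, clf_b, threshold=0.5):
    """Keep a clip only when two independent classifiers agree on whether
    a relevant human action is present; route disagreements to the discard pile
    (in the real pipeline, such clips might instead go to human annotators)."""
    kept, discarded = [], []
    for clip in clips:
        vote_a = clf_a(clip) >= threshold   # probability that a relevant action occurs
        vote_b = clf_b(clip) >= threshold
        if vote_a == vote_b:
            (kept if vote_a else discarded).append(clip)
        else:
            discarded.append(clip)
    return kept, discarded
```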
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolna Ifehir."}, {"start": 4.64, "end": 10.72, "text": " This project is about the data set created through a joint effort between MIT and Facebook."}, {"start": 10.72, "end": 15.280000000000001, "text": " As it turns out, this data set is way more useful than I initially thought I'll tell you"}, {"start": 15.280000000000001, "end": 17.12, "text": " in a moment why."}, {"start": 17.12, "end": 21.400000000000002, "text": " Data sets are used to train and test the quality of learning algorithms."}, {"start": 21.400000000000002, "end": 24.84, "text": " This particular data set contains short video clips."}, {"start": 24.84, "end": 30.04, "text": " These clips are passed to a neural network which is asked to classify the kind of activity"}, {"start": 30.04, "end": 31.92, "text": " that is taking place in the video."}, {"start": 31.92, "end": 37.08, "text": " In this data set, there are many cases where everything is given to come to a logical answer"}, {"start": 37.08, "end": 38.4, "text": " that is wrong."}, {"start": 38.4, "end": 43.760000000000005, "text": " We may be in a room with the climbing wall, but exercising is not necessarily happening."}, {"start": 43.760000000000005, "end": 48.56, "text": " We could be around the swimming pool, but swimming is not necessarily happening."}, {"start": 48.56, "end": 50.879999999999995, "text": " I'm pretty sure this has happened to you too."}, {"start": 50.88, "end": 56.56, "text": " This is a brilliant idea because it is super easy for a neural network to assume that if"}, {"start": 56.56, "end": 62.32, "text": " there is a swimming pool, swimming is probably happening, but it takes a great deal of understanding"}, {"start": 62.32, "end": 65.92, "text": " to actually know what constitutes the swimming part."}, {"start": 65.92, "end": 70.84, "text": " A few episodes ago, we discussed that this could potentially be a stepping stone towards"}, {"start": 70.84, "end": 74.16, "text": " creating machines that think like humans."}, {"start": 74.16, "end": 78.84, "text": " Without looking into it, it would be easy to think that creating a data set is basically"}, {"start": 78.84, "end": 82.44, "text": " throwing a bunch of training samples together and calling it a day."}, {"start": 82.44, "end": 87.08, "text": " I can assure you that this is not the case and that creating a data set like this was"}, {"start": 87.08, "end": 93.16, "text": " a Herculean effort as it contains more than half a million videos and almost 2 million"}, {"start": 93.16, "end": 96.56, "text": " annotations for 200 different activities."}, {"start": 96.56, "end": 101.80000000000001, "text": " And there are plenty of pre-processing steps that one has to perform to make it usable."}, {"start": 101.80000000000001, "end": 107.04, "text": " The collection procedure contains a video crawling step where a large number of videos are obtained"}, {"start": 107.04, "end": 112.72, "text": " from YouTube which are to be de-duplicated, which means removing videos that are too similar"}, {"start": 112.72, "end": 115.76, "text": " to one already contained in the database."}, {"start": 115.76, "end": 120.12, "text": " A classical case is many different kinds of commentary on the same footage."}, {"start": 120.12, "end": 125.08000000000001, "text": " This amounted to the removal of more than 150,000 videos."}, {"start": 125.08000000000001, "end": 130.36, "text": " Then all of these videos undergo a shot 
and person detection step where relevant subclips"}, {"start": 130.36, "end": 134.16, "text": " are extracted that contain some kind of human activity."}, {"start": 134.16, "end": 139.72, "text": " These are then looked at by two different classifiers and depending on whether there was a consensus"}, {"start": 139.72, "end": 144.88, "text": " between the two, a decision is made whether the clip is to be discarded or not."}, {"start": 144.88, "end": 150.24, "text": " This step helps balancing the ratio of videos where there is some sort of relevant action"}, {"start": 150.24, "end": 153.88, "text": " compared to the clips where there is no relevant action happening."}, {"start": 153.88, "end": 159.24, "text": " This also makes the negative samples much harder because the context may be correct but the expected"}, {"start": 159.24, "end": 161.2, "text": " activity may not be there."}, {"start": 161.2, "end": 165.44, "text": " This is the classical hard case with the swimming pool and people in swimming suits"}, {"start": 165.44, "end": 168.16, "text": " twiddling their thumbs instead of swimming."}, {"start": 168.16, "end": 170.48, "text": " And here comes the more interesting part."}, {"start": 170.48, "end": 175.67999999999998, "text": " When trying to train a neural network for other, loosely related tasks using this dataset"}, {"start": 175.67999999999998, "end": 179.0, "text": " for pre-training improves the scores significantly."}, {"start": 179.0, "end": 184.51999999999998, "text": " I'll try to give a little context for the numbers because these numbers are absolutely incredible."}, {"start": 184.51999999999998, "end": 190.6, "text": " There are cases where the success rate is improved by over 30% which speaks for itself."}, {"start": 190.6, "end": 196.76, "text": " However, there are other cases where the difference is about 10 to 15% that is also remarkable"}, {"start": 196.76, "end": 202.32, "text": " when we are talking about high numbers because the closer the classifier gets to 100% the"}, {"start": 202.32, "end": 206.68, "text": " more difficult the remaining corner cases are that improve the accuracy."}, {"start": 206.68, "end": 210.48, "text": " In these cases even a 3% improvement is remarkable."}, {"start": 210.48, "end": 215.56, "text": " And before we go, greetings and best regards to Lucas, the little scholar who seems to be"}, {"start": 215.56, "end": 218.92, "text": " absorbing the papers along with the mother's milk."}, {"start": 218.92, "end": 222.67999999999998, "text": " Excellent, you can start early enough in the pursuit of knowledge."}, {"start": 222.68, "end": 251.20000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=WhaRsrlaXLk
DeepMind Control Suite | Two Minute Papers #226
The paper "DeepMind Control Suite" and its source code is available here: https://arxiv.org/pdf/1801.00690v1.pdf https://github.com/deepmind/dm_control We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2921430/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This footage that you see here came freshly from Google DeepMind's lab and is about benchmarking reinforcement learning algorithms. Here, you see the classical cart-pole swing-up task from this package. As the algorithm starts to play, a score is recorded that indicates how well it is doing, and the learner has to choose the appropriate actions, depending on the state of the environment, to maximize this score. Reinforcement learning is an established research subfield within machine learning, with hundreds of papers appearing every year. However, we see that most of them cherry-pick a few problems and test against previous works on this very particular selection of tasks. This paper describes a package that is not about an algorithm itself, but about helping future research projects test their results against previous works on an equal footing. This is a great idea, which has been addressed earlier by OpenAI with their learning environment by the name of Gym. So the first question is, why do we need a new one? The DeepMind Control Suite provides a few differentiating features. One, Gym contains both discrete and continuous tasks, whereas this one concentrates on continuous problems only. This means that state, time, and action are all continuous, which is usually the hallmark of more challenging and lifelike problems. For an algorithm to do well, it has to be able to learn the concepts of velocity, acceleration, and other meaningful physical quantities, and understand their evolution over time. Two, there are domains where the new Control Suite is a superset of Gym, meaning that it offers equivalent tasks and then some more. And three, the action and reward structures are standardized. This means that the results and learning curves are much more informative and easier to read. This is crucial because research scientists read hundreds of papers every year, and this means that they don't necessarily have to look at videos; they immediately have an intuition of how an algorithm works and how it relates to previous techniques just by looking at the learning curve plots. Many tasks also include a much more challenging variant with more sparse rewards. We discussed these sparse rewards in a bit more detail in the previous episode. If you are interested, make sure to click the card in the lower right at the end of this video. The paper also contains an exciting roadmap for future development, including quadruped locomotion, multi-threaded dynamics, and more. Of course, the whole suite is available free of charge for everyone. The link is available in the description. Super excited to see a deluge of upcoming AI papers and see how they beat the living hell out of each other in 2018. Thanks for watching and for your generous support, and I'll see you next time.
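For context, here is a minimal interaction loop with the DeepMind Control Suite, based on the publicly documented dm_control API (exact names may vary between versions). A random policy stands in for a learned one, and the cart-pole swing-up task mentioned above is used as the example:

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
action_spec = env.action_spec()

time_step = env.reset()
total_reward = 0.0
while not time_step.last():
    # Sample random actions within the continuous action bounds, as a stand-in
    # for a learned policy.
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    total_reward += time_step.reward or 0.0
print("episode return:", total_reward)
```

Because the action and reward structures are standardized across tasks, the same loop works for the other domains in the suite, which is what makes learning curves directly comparable between papers.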
[{"start": 0.0, "end": 4.14, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.14, "end": 8.540000000000001, "text": " This footage that you see here came freshly from Google DeepMind's lab"}, {"start": 8.540000000000001, "end": 12.08, "text": " and is about benchmarking reinforcement learning algorithms."}, {"start": 12.08, "end": 16.72, "text": " Here, you see the classical cardboard swing-up task from this package."}, {"start": 16.72, "end": 22.18, "text": " As the algorithm starts to play, a score is recorded that indicates how well it is doing"}, {"start": 22.18, "end": 28.8, "text": " and the learner has to choose the appropriate actions depending on the state of the environment to maximize this score."}, {"start": 28.8, "end": 32.02, "text": " Reinforcement learning is an established research subfield"}, {"start": 32.02, "end": 35.94, "text": " within machine learning with hundreds of papers appearing every year."}, {"start": 35.94, "end": 39.54, "text": " However, we see that most of them cherry pick a few problems"}, {"start": 39.54, "end": 43.74, "text": " and test against previous works on this very particular selection of tasks."}, {"start": 43.74, "end": 47.78, "text": " This paper describes a package that is not about the algorithm itself"}, {"start": 47.78, "end": 52.3, "text": " but about helping future research projects to be able to test their results"}, {"start": 52.3, "end": 54.86, "text": " against previous works on an equal footing."}, {"start": 54.86, "end": 62.1, "text": " This is a great idea which has been addressed earlier by OpenAI with their learning environment by the name Jim."}, {"start": 62.1, "end": 65.14, "text": " So the first question is, why do we need a new one?"}, {"start": 65.14, "end": 69.3, "text": " The DeepMind Control Suite provides a few differentiating features."}, {"start": 69.3, "end": 77.34, "text": " One, Jim contains both discrete and continuous tasks where this one is concentrated on continuous problems only."}, {"start": 77.34, "end": 81.62, "text": " This means that state, time, and action are all continuous"}, {"start": 81.62, "end": 85.82000000000001, "text": " which is usually the hallmark of more challenging and lifelike problems."}, {"start": 85.82000000000001, "end": 92.02000000000001, "text": " For an algorithm to do well, it has to be able to learn the concept of velocity, acceleration,"}, {"start": 92.02000000000001, "end": 97.06, "text": " and other meaningful physical concepts and understand their evolution over time."}, {"start": 97.06, "end": 101.54, "text": " Two, there are domains where the new control suite is a superset of Jim"}, {"start": 101.54, "end": 105.5, "text": " meaning that it offers equivalent tasks and then some more."}, {"start": 105.5, "end": 109.38000000000001, "text": " And three, the action and reward structures are standardized."}, {"start": 109.38, "end": 114.61999999999999, "text": " This means that the results and learning curves are much more informative and easier to read."}, {"start": 114.61999999999999, "end": 119.38, "text": " This is crucial because research scientists read hundreds of papers every year"}, {"start": 119.38, "end": 122.46, "text": " and this means that they don't necessarily have to look at videos."}, {"start": 122.46, "end": 126.1, "text": " They immediately have an intuition of how an algorithm works"}, {"start": 126.1, "end": 131.14, "text": " and how it relates to previous techniques just by looking at the learning curve 
plots."}, {"start": 131.14, "end": 136.14, "text": " Many tasks also include a much more challenging variant with more sparse rewards."}, {"start": 136.14, "end": 140.61999999999998, "text": " We discussed these sparse rewards in a bit more detail in the previous episode."}, {"start": 140.61999999999998, "end": 145.42, "text": " If you are interested, make sure to click the card on the lower right at the end of this video."}, {"start": 145.42, "end": 149.57999999999998, "text": " The paper also contains an exciting roadmap for future development"}, {"start": 149.57999999999998, "end": 153.77999999999997, "text": " including quadruped locomotion, multithreaded dynamics, and more."}, {"start": 153.77999999999997, "end": 157.57999999999998, "text": " Of course, the whole suite is available free of charge for everyone."}, {"start": 157.57999999999998, "end": 159.57999999999998, "text": " The link is available in the description."}, {"start": 159.57999999999998, "end": 163.5, "text": " Super excited to see a deluge of upcoming AI papers"}, {"start": 163.5, "end": 167.98, "text": " and see how they beat the living hell out of each other in 2018."}, {"start": 167.98, "end": 196.14, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DW1AuOC9TQc
Reinforcement Learning With Noise (OpenAI) | Two Minute Papers #225
The paper "Better Exploration with Parameter Noise" and its source code is available here: https://arxiv.org/abs/1706.01905 https://github.com/openai/baselines The write-up and our Patreon page with the details: https://www.patreon.com/posts/technical-for-16738692 https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2560006/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about improving reinforcement learning. Reinforcement learning is a learning algorithm that we can use to choose a set of actions in an environment to maximize a score. Classical example applications are helicopter control, where the score to be maximized would be proportional to the distance that we traveled safely, or any computer game of your choice, where a score can describe how well we are doing. For instance, in Frostbite, our score describes how many jumps we have survived without dying, and this score is subject to maximization. Earlier, scientists at DeepMind combined a reinforcement learner with a deep neural network so the algorithm could look at the screen and play the game much like a human player would. This problem is especially difficult when the rewards are sparse. This is similar to what a confused student would experience after a written exam where only one grade is given but the results for the individual problems are not shown. It is quite hard to know where we did well and where we missed the mark, and it is much more challenging to choose the appropriate topics to study to do better next time. When starting out, the learner explores the parameter space and performs crazy, seemingly nonsensical actions until it finds a few scenarios where it is able to do well. This can be thought of as adding noise to the actions of the agent. Scientists at OpenAI propose an approach where they add noise not directly to the actions but to the parameters of the agent, which results in perturbations that depend on the information that the agent senses. This leads to less flailing and a more systematic exploration that substantially decreases the time taken to learn tasks with sparse rewards. For instance, it makes a profound difference if we use it in the Walker game. As you can see here, the algorithm with the parameter space noise is able to learn the concept of galloping, while the traditional method does, well, I am not sure what it is doing to be honest, but it is significantly less efficient. The solution does not come without challenges. For instance, different layers respond differently to this added noise, and the effect of the noise on the outputs grows over time, which requires changing the amount of noise to be added depending on its expected effect on the output. This technique is called adaptive noise scaling. There are plenty of comparisons and other cool details in the paper, so make sure to have a look; it is available in the video description. DeepMind's deep reinforcement learning was published in 2015 with some breathtaking results and superhuman plays on a number of different games, and it has already been improved leaps and bounds beyond its initial version. And we are talking about OpenAI, so of course the source code of this project is available under the permissive MIT license. In the meantime, we have recently been able to upgrade our entire sound recording pipeline through your support on Patreon. I have been yearning for this for a long, long time now, and not only that, but we could also extend our software pipeline with sound processing units that use AI and work like magic. Quite fitting for the series, right? Next up is a recording room or recording corner with acoustic treatment, depending on our budget. And thank you for your support, it makes a huge difference. A more detailed write-up on this is available in the video description. Have a look. 
Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona-Ifahir."}, {"start": 4.88, "end": 7.96, "text": " This work is about improving reinforcement learning."}, {"start": 7.96, "end": 12.64, "text": " Reinforcement learning is a learning algorithm that we can use to choose a set of actions"}, {"start": 12.64, "end": 15.24, "text": " in an environment to maximize a score."}, {"start": 15.24, "end": 20.04, "text": " Our classical example applications are helicopter control where the score to be maximized would"}, {"start": 20.04, "end": 25.68, "text": " be proportional to the distance that we traveled safely or any computer game of your choice"}, {"start": 25.68, "end": 28.72, "text": " where a score can describe how well we are doing."}, {"start": 28.72, "end": 33.96, "text": " For instance, in Frostbite, our score describes how many jumps we have survived without dying"}, {"start": 33.96, "end": 36.84, "text": " and this score is subject to maximization."}, {"start": 36.84, "end": 42.48, "text": " Earlier, scientists at DeepMind combined reinforcement learner with a deep neural network so the"}, {"start": 42.48, "end": 47.44, "text": " algorithm could look at the screen and play the game much like a human player would."}, {"start": 47.44, "end": 51.239999999999995, "text": " This problem is especially difficult when the rewards are sparse."}, {"start": 51.239999999999995, "end": 55.56, "text": " This is similar to what a confused student would experience after a written exam where"}, {"start": 55.56, "end": 60.64, "text": " only one grade is given but the results for the individual problems are not shown."}, {"start": 60.64, "end": 65.12, "text": " It is quite hard to know where we did well and where we missed the mark and it is much"}, {"start": 65.12, "end": 69.96000000000001, "text": " more challenging to choose the appropriate topics to study to do better next time."}, {"start": 69.96000000000001, "end": 75.92, "text": " When starting out, the learner starts exploring the parameter space and performs crazy, seemingly"}, {"start": 75.92, "end": 81.24000000000001, "text": " non-sensical actions until it finds a few scenarios where it is able to do well."}, {"start": 81.24000000000001, "end": 85.52000000000001, "text": " This can be thought of as adding noise to the actions of the agent."}, {"start": 85.52, "end": 90.88, "text": " Scientists at OpenAI propose an approach where they add noise not directly to the actions"}, {"start": 90.88, "end": 96.47999999999999, "text": " but to the parameters of the agent which results in perturbations that depend on the information"}, {"start": 96.47999999999999, "end": 98.16, "text": " that the agent senses."}, {"start": 98.16, "end": 103.44, "text": " This leads to less flailing and the more systematic exploration that substantially decreases the"}, {"start": 103.44, "end": 106.72, "text": " time taken to learn tasks with sparse rewards."}, {"start": 106.72, "end": 111.03999999999999, "text": " For instance, it makes a profound difference if we use it in the Walker game."}, {"start": 111.04, "end": 115.36000000000001, "text": " As you can see here, the algorithm with the parameter space noise is able to learn the"}, {"start": 115.36000000000001, "end": 120.88000000000001, "text": " concept of galloping while the traditional method does, well, I am not sure what it is doing"}, {"start": 120.88000000000001, "end": 124.16000000000001, "text": " to be honest but it is significantly less efficient."}, 
{"start": 124.16000000000001, "end": 126.76, "text": " The solution does not come without challenges."}, {"start": 126.76, "end": 132.32, "text": " For instance, different layers respond differently to this added noise and the effect of the noise"}, {"start": 132.32, "end": 137.76, "text": " on the outputs grows over time which requires changing the amount of noise to be added depending"}, {"start": 137.76, "end": 140.32, "text": " on its expected effect on the output."}, {"start": 140.32, "end": 143.32, "text": " This technique is called Adaptive Noise Scaling."}, {"start": 143.32, "end": 148.12, "text": " There are plenty of comparisons and other cool details in the paper make sure to have a look"}, {"start": 148.12, "end": 150.68, "text": " it is available in the video description."}, {"start": 150.68, "end": 156.35999999999999, "text": " DeepMind's deep reinforcement learning was published in 2015 with some breathtaking results"}, {"start": 156.35999999999999, "end": 161.64, "text": " and superhuman plays on a number of different games and it has already been improved leaves"}, {"start": 161.64, "end": 164.44, "text": " and bounds beyond its initial version."}, {"start": 164.44, "end": 169.4, "text": " And we are talking about OpenAI so of course the source code of this project is available"}, {"start": 169.4, "end": 171.88, "text": " under the permissive MIT license."}, {"start": 171.88, "end": 176.52, "text": " In the meantime we have recently been able to upgrade the entire TFR sound recording"}, {"start": 176.52, "end": 179.28, "text": " pipeline through your support on Patreon."}, {"start": 179.28, "end": 184.16, "text": " I have been yearning for this for a long, long time now and not only that but we could"}, {"start": 184.16, "end": 189.56, "text": " also extend our software pipeline with sound processing units that use AI and work like"}, {"start": 189.56, "end": 190.64000000000001, "text": " magic."}, {"start": 190.64000000000001, "end": 192.68, "text": " Quite fitting for the series right?"}, {"start": 192.68, "end": 197.6, "text": " Next up is a recording room or recording corner with acoustic treatment depending on our"}, {"start": 197.6, "end": 198.6, "text": " budget."}, {"start": 198.6, "end": 202.07999999999998, "text": " And thank you for your support it makes a huge difference."}, {"start": 202.07999999999998, "end": 205.84, "text": " A more detailed write up on this is available in the video description."}, {"start": 205.84, "end": 206.84, "text": " Have a look."}, {"start": 206.84, "end": 228.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
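To make the parameter-space noise idea above more concrete, here is a hedged NumPy sketch of adaptive noise scaling: the policy parameters (not the actions) are perturbed, the resulting change in behavior is measured in action space, and the noise scale is grown or shrunk toward a target. The toy linear policy, the batch of states and all constants are illustrative assumptions, not the OpenAI baselines implementation.

# Hedged NumPy sketch of parameter-space noise with adaptive noise scaling.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 2))        # policy parameters: state -> action
sigma = 0.1                            # current parameter-noise scale
target_dist = 0.2                      # desired action-space perturbation
adapt = 1.01                           # multiplicative adaptation factor (illustrative value)

def act(params, state):
    return np.tanh(state @ params)     # a toy deterministic policy

for episode in range(100):
    # Perturb the parameters, not the actions.
    perturbed = theta + sigma * rng.normal(size=theta.shape)
    states = rng.normal(size=(32, 4))  # stand-in for a batch of recently seen states
    # Measure how much the perturbation changed the behavior in action space.
    dist = np.sqrt(np.mean((act(theta, states) - act(perturbed, states)) ** 2))
    # Adaptive noise scaling: keep the behavioral change near the target.
    sigma = sigma / adapt if dist > target_dist else sigma * adapt
    # ... run the perturbed policy in the environment and update theta here ...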
Two Minute Papers
https://www.youtube.com/watch?v=FMEk8cHF-OA
DeepMind's AI Learns Object Sounds | Two Minute Papers #224
The paper "Objects that Sound" is available here: https://arxiv.org/abs/1712.06651 https://www.youtube.com/watch?v=TFyohksFd48 https://www.youtube.com/watch?v=x_qusr58ruU Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Recommended for you: Look, Listen & Learn - https://www.youtube.com/watch?v=mL3CzZcBJZU We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-756326/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about creating an AI that can perform audio-visual correspondence. This means two really cool tasks. One, when given a piece of video and audio, it can guess whether they match each other. And two, it can localize the source of the sounds heard in the video. And wait, because this gets even better: as opposed to previous works, here the entire network is trained from scratch and is able to perform cross-modal retrieval. Cross-modal retrieval means that we are able to give it an input sound and it will be able to find pictures that would produce similar sounds. Or vice versa. For instance, here the input is the sound of a guitar, note the loudspeaker icon in the corner, and it shows us a bunch of either images or sounds that are similar. Marvelous. The training is unsupervised, which means that the algorithm is given a bunch of data and learns without additional labels or instructions. The architecture and results are compared to a previous work by the name Look, Listen and Learn that we covered earlier in the series; the link is available in the video description. As you can see, both of them run a convolutional neural network. This is one of my favorite parts about deep learning. The very same algorithm is able to process and understand signals of very different kinds, video and audio. The old work concatenates this information and produces a binary yes-no decision on whether it thinks the two streams match. This new work instead produces a number that encodes the distance between the video and the audio. Kind of like the distance between two countries on a map, but both video and audio signals are embedded in the same map. And the output decision always depends on how small or big this distance is. This distance metric is quite useful. If we have an input video or audio signal, choosing other video and audio snippets that have a low distance is one of the important steps that opens up the door to this magical cross-modal retrieval. What a time to be alive. Some results are easy to verify, others may spark some debate. For instance, it is quite interesting to see that the algorithm highlights the entirety of the guitar string as a sound source. If you are curious about this mysterious blue image here, make sure to have a look at the paper for an explanation. Now this is a story that we would like to tell to as many people as possible. Everyone needs to hear about this. If you would like to help us with our quest, please consider supporting us on Patreon. You can pick up some cool perks like getting early access to these videos or deciding the order of upcoming episodes. Details are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.120000000000001, "text": " This work is about creating an AI that can perform audio-visual correspondence."}, {"start": 10.120000000000001, "end": 12.92, "text": " This means two really cool tasks."}, {"start": 12.92, "end": 36.36, "text": " One, when given a piece of video and audio, it can guess whether they match each other."}, {"start": 36.36, "end": 50.519999999999996, "text": " And two, it can localize the source of the sounds heard in the video."}, {"start": 66.36, "end": 77.72, "text": " And wait, because this gets even better, as opposed to previous works, here the entire"}, {"start": 77.72, "end": 83.03999999999999, "text": " network is trained from scratch and is able to perform cross-model retrieval."}, {"start": 83.03999999999999, "end": 87.84, "text": " Cross-model retrieval means that we are able to give it an input sound and it will be"}, {"start": 87.84, "end": 92.32, "text": " able to find pictures that would produce similar sounds."}, {"start": 92.32, "end": 93.96000000000001, "text": " Or vice versa."}, {"start": 93.96, "end": 98.91999999999999, "text": " For instance, here the input is the sound of a guitar, note the loudspeaker icon in the"}, {"start": 98.91999999999999, "end": 105.08, "text": " corner, and it shows us a bunch of either images or sounds that are similar."}, {"start": 105.08, "end": 106.11999999999999, "text": " Marvelous."}, {"start": 106.11999999999999, "end": 111.39999999999999, "text": " The training is unsupervised, which means that the algorithm is given a bunch of data and"}, {"start": 111.39999999999999, "end": 114.56, "text": " learns without additional labels or instructions."}, {"start": 114.56, "end": 119.63999999999999, "text": " The architecture and results are compared to a previous work by the name Look, Listen,"}, {"start": 119.64, "end": 124.84, "text": " and Learn that we covered earlier in the series, the link is available in the video description."}, {"start": 124.84, "end": 128.64, "text": " As you can see, both of them run a convolution on your own network."}, {"start": 128.64, "end": 131.64, "text": " This is one of my favorite parts about deep learning."}, {"start": 131.64, "end": 138.24, "text": " The very same algorithm is able to process and understand signals of very different kinds,"}, {"start": 138.24, "end": 139.8, "text": " video and audio."}, {"start": 139.8, "end": 145.2, "text": " The old work concatenates this information and produces a binary yes-no decision whether"}, {"start": 145.2, "end": 147.72, "text": " it thinks the two streams match."}, {"start": 147.72, "end": 153.16, "text": " This new work tries to produce a number that encodes the distance between the video and"}, {"start": 153.16, "end": 154.16, "text": " the audio."}, {"start": 154.16, "end": 159.6, "text": " Kind of like the distance between two countries on a map, but both video and audio signals"}, {"start": 159.6, "end": 161.76, "text": " are embedded in the same map."}, {"start": 161.76, "end": 166.56, "text": " And the output decision always depends on how small or big this distance is."}, {"start": 166.56, "end": 168.84, "text": " This distance metric is quite useful."}, {"start": 168.84, "end": 173.96, "text": " If we have an input video or audio signal, choosing other video and audio snippets that"}, {"start": 173.96, "end": 179.12, "text": " have a low distance is one of the important steps that opens 
up the door to this magical"}, {"start": 179.12, "end": 181.32000000000002, "text": " cross-model retrieval."}, {"start": 181.32000000000002, "end": 182.96, "text": " What a time to be alive."}, {"start": 182.96, "end": 187.08, "text": " Some results are easy to verify, others may spark some debate."}, {"start": 187.08, "end": 191.84, "text": " For instance, it is quite interesting to see that the algorithm highlights the entirety"}, {"start": 191.84, "end": 194.36, "text": " of the guitar string as a sound source."}, {"start": 194.36, "end": 198.52, "text": " If you are curious about this mysterious blue image here, make sure to have a look at"}, {"start": 198.52, "end": 200.60000000000002, "text": " the paper for an explanation."}, {"start": 200.6, "end": 205.92, "text": " Now this is a story that we would like to tell to as many people as possible."}, {"start": 205.92, "end": 207.32, "text": " Everyone needs to hear about this."}, {"start": 207.32, "end": 211.92, "text": " If you would like to help us with our quest, please consider supporting us on Patreon."}, {"start": 211.92, "end": 216.76, "text": " You can pick up some cool perks like getting early access to these videos or deciding"}, {"start": 216.76, "end": 218.88, "text": " the order of upcoming episodes."}, {"start": 218.88, "end": 221.28, "text": " Details are available in the video description."}, {"start": 221.28, "end": 240.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
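The shared embedding space described above is what makes cross-modal retrieval a simple nearest-neighbor lookup. Below is a hedged sketch of that retrieval step; image_embed and audio_embed are hypothetical stand-ins for the trained vision and audio subnetworks, and the distance measure is plain Euclidean distance in the shared space.

# Hedged sketch of cross-modal retrieval with a shared embedding space.
import numpy as np

def retrieve_images_for_sound(sound, images, audio_embed, image_embed, k=5):
    # image_embed() and audio_embed() are hypothetical trained subnetworks.
    q = audio_embed(sound)                                   # embed the query sound
    q = q / np.linalg.norm(q)
    gallery = np.stack([image_embed(im) for im in images])   # embed candidate images
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    # A smaller distance in the shared space means a better audio-visual match.
    dists = np.linalg.norm(gallery - q, axis=1)
    return np.argsort(dists)[:k]                             # indices of the k nearest images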
Two Minute Papers
https://www.youtube.com/watch?v=uOiOhVgR3VA
Building Machines That Learn and Think Like People | Two Minute Papers #223
The paper "Building Machines That Learn and Think Like People" is available here: https://arxiv.org/abs/1604.00289 DeepMind's commentary article: https://arxiv.org/ftp/arxiv/papers/1711/1711.08378.pdf One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Frostbite gameplay video source: https://www.youtube.com/watch?v=J2oSbAbcOPg Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2981726/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper discusses possible roadmaps towards building machines that are endowed with human-like thinking. And before we go into that, the first question would be, is there value in building machines that think like people? Do they really need to think like people? Isn't it a bit egotistical to say that if they are to become any good at this and this task, they have to think like us? And the answer is, well, in some cases, yes. If you remember DeepMind's Deep Q-learning algorithm, it was able to play on a superhuman level on 29 out of 49 different Atari games. For instance, it did quite well in Breakout, but less so in Frostbite. And by Frostbite, I mean not the game engine, but the Atari game from 1983 where we need to hop from ice floe to ice floe and construct an igloo. However, we are not meant to jump around arbitrarily; we can gather these pieces by jumping on the active ice floes only, and these are shown with white color. Have a look at this plot. It shows the score it was able to produce as a function of game experience in hours. As you can see, the original DQN is doing quite poorly, while the extended versions of the technique can reach a relatively high score over time. This looks really good. Until we look at the x-axis, because then we see that this takes around 462 hours and the scores plateau afterwards. Well, compare that to humans, who can do at least as well, or a bit better, after a mere 2 hours of training. So clearly, there are cases where there is an argument to be made for the usefulness of human-like AI. The paper describes several possible directions that may help us achieve this. Two of them are understanding intuitive physics and intuitive psychology. Even young infants understand that objects follow smooth paths and expect liquids to go around barriers. We can try to endow an AI with similar knowledge by feeding it with physics simulations and their evolution over time to get an understanding of similar phenomena. This could be used to augment already existing neural networks and give them a better understanding of the world around us. Intuitive psychology is also present in young infants. They can tell people from objects or distinguish social from anti-social agents. They can also learn goal-based reasoning quite early. This means that a human who looks at an experienced player play Frostbite can easily derive the rules of the game in a matter of minutes. Kind of what we are doing now. Neural networks also have a limited understanding of compositionality and causality, and often perform poorly when describing the content of images that contain previously known objects interacting in novel and unseen ways. There are several ways of achieving each of these elements described in the paper. If we manage to build an AI that is endowed with these properties, it may be able to think like humans and, through self-improvement, may achieve the kind of intelligence that we see in all these science fiction movies. There is lots more in the paper: learning to learn, approximate models for thinking faster, model-free reinforcement learning, and a nice Q&A section with responses to common questions and criticisms. It is a great read, and it is easy to understand for everyone. I encourage you to have a look at the video description for the link to it. 
Scientists at Google DeepMind have also written a commentary article where they largely agree with the premises described in this paper, and add some thoughts about the importance of autonomy in building human-like intelligence. Both papers are available in the video description, and both are great reads, so make sure to have a look at them. It is really cool that we have plenty of discussions on potential ways to create a more general intelligence that is at least as potent as humans in a variety of different tasks. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifei."}, {"start": 4.36, "end": 10.0, "text": " This paper discusses possible roadmaps towards building machines that are endowed with human"}, {"start": 10.0, "end": 11.200000000000001, "text": " like thinking."}, {"start": 11.200000000000001, "end": 16.32, "text": " And before we go into that, the first question would be, is there value in building machines"}, {"start": 16.32, "end": 17.84, "text": " that think like people?"}, {"start": 17.84, "end": 19.84, "text": " Do they really need to think like people?"}, {"start": 19.84, "end": 25.16, "text": " Isn't it a bit agotistical to say, if they are to become any good at this and this task,"}, {"start": 25.16, "end": 27.04, "text": " they have to think like us?"}, {"start": 27.04, "end": 30.48, "text": " And the answer is, well, in some cases, yes."}, {"start": 30.48, "end": 35.84, "text": " If you remember DeepMind's DeepQ learning algorithm, it was able to play on a superhuman"}, {"start": 35.84, "end": 40.64, "text": " level on 29 out of 49 different Atari games."}, {"start": 40.64, "end": 45.4, "text": " For instance, it did quite well in breakout, but less so in frostbite."}, {"start": 45.4, "end": 51.36, "text": " And by frostbite, I mean not the game engine, but the Atari game from 1983 where we need"}, {"start": 51.36, "end": 55.480000000000004, "text": " to hop from ice flow to ice flow and construct an igloo."}, {"start": 55.48, "end": 60.64, "text": " However, we are not meant to jump around arbitrarily, we can gather these pieces by jumping on"}, {"start": 60.64, "end": 64.96, "text": " the active ice flows only, and these are shown with white color."}, {"start": 64.96, "end": 66.16, "text": " Have a look at this plot."}, {"start": 66.16, "end": 71.6, "text": " It shows the score it was able to produce as a function of game experience in hours."}, {"start": 71.6, "end": 77.08, "text": " As you can see, the original DQN is doing quite poorly, while the extended versions of"}, {"start": 77.08, "end": 81.12, "text": " the technique can reach a relatively high score over time."}, {"start": 81.12, "end": 82.75999999999999, "text": " This looks really good."}, {"start": 82.76, "end": 90.36, "text": " Until we look at the x-axis, because then we see that this takes around 462 hours and"}, {"start": 90.36, "end": 92.56, "text": " the scores plateau afterwards."}, {"start": 92.56, "end": 98.04, "text": " Well, compared that to humans that can do at least as well, or a bit better, after a mere"}, {"start": 98.04, "end": 99.88000000000001, "text": " 2 hours of training."}, {"start": 99.88000000000001, "end": 104.36000000000001, "text": " So clearly, there are cases where there is an argument to be made for the usefulness"}, {"start": 104.36000000000001, "end": 105.88000000000001, "text": " of human like AI."}, {"start": 105.88000000000001, "end": 110.36000000000001, "text": " The paper describes several possible directions that may help us achieve this."}, {"start": 110.36, "end": 115.24, "text": " Two of them is understanding intuitive physics and intuitive psychology."}, {"start": 115.24, "end": 121.03999999999999, "text": " Even young infants understand that objects follow smooth paths and expect liquids to go"}, {"start": 121.03999999999999, "end": 122.44, "text": " around barriers."}, {"start": 122.44, "end": 128.36, "text": " We can try to endow an AI with similar knowledge by feeding it with physics simulations and"}, {"start": 
128.36, "end": 132.72, "text": " their evolution over time to get an understanding of similar phenomena."}, {"start": 132.72, "end": 137.52, "text": " This could be used to augment already existing neural networks and give them a better understanding"}, {"start": 137.52, "end": 139.36, "text": " of the world around us."}, {"start": 139.36, "end": 142.68, "text": " Active psychology is also present in young infants."}, {"start": 142.68, "end": 148.20000000000002, "text": " They can tell people from objects or distinguish other social and anti-social agents."}, {"start": 148.20000000000002, "end": 151.4, "text": " They can also learn goal-based reasoning quite early."}, {"start": 151.4, "end": 156.64000000000001, "text": " This means that a human who looks at an experienced player play frostbite can easily derive the"}, {"start": 156.64000000000001, "end": 159.32000000000002, "text": " rules of the game in a matter of minutes."}, {"start": 159.32000000000002, "end": 161.4, "text": " Kind of what we are doing now."}, {"start": 161.4, "end": 166.72000000000003, "text": " Neural networks also have a limited understanding of compositionality and causality, and often"}, {"start": 166.72, "end": 172.56, "text": " perform poorly when describing the content of images that contain previously known objects"}, {"start": 172.56, "end": 175.68, "text": " interacting in novel and scene ways."}, {"start": 175.68, "end": 179.96, "text": " There are several ways of achieving each of these elements described in the paper."}, {"start": 179.96, "end": 184.8, "text": " If we manage to build an AI that is endowed with these properties, it may be able to think"}, {"start": 184.8, "end": 189.76, "text": " like humans and, through self-improvement, may achieve the kind of intelligence that"}, {"start": 189.76, "end": 192.48, "text": " we see in all these science fiction movies."}, {"start": 192.48, "end": 198.07999999999998, "text": " There is lots more in the paper, learning to learn approximate models for thinking faster,"}, {"start": 198.07999999999998, "end": 203.92, "text": " model-free reinforcement learning, and a nice Q&A section with responses to common questions"}, {"start": 203.92, "end": 204.92, "text": " and criticisms."}, {"start": 204.92, "end": 208.51999999999998, "text": " It is a great read, and it is easy to understand for everyone."}, {"start": 208.51999999999998, "end": 212.56, "text": " I encourage you to have a look at the video description for the link to it."}, {"start": 212.56, "end": 217.6, "text": " Scientists at Google DeepMind have also written a commentary article where they largely agree"}, {"start": 217.6, "end": 222.84, "text": " with the premises described in this paper, and add some thoughts about the importance of"}, {"start": 222.84, "end": 226.16, "text": " autonomy in building human-like intelligence."}, {"start": 226.16, "end": 230.95999999999998, "text": " Both papers are available in the video description, and both are great reads, so make sure to"}, {"start": 230.95999999999998, "end": 232.12, "text": " have a look at them."}, {"start": 232.12, "end": 237.2, "text": " It is really cool that we have plenty of discussions on potential ways to create a more general"}, {"start": 237.2, "end": 242.76, "text": " intelligence that is at least as potent as humans in a variety of different tasks."}, {"start": 242.76, "end": 244.2, "text": " What a time to be alive!"}, {"start": 244.2, "end": 248.04, "text": " Thanks for watching and for your generous support, and I'll see you next 
time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=fTBeNAu18_s
This Autonomous Robot Models Your House Interior | Two Minute Papers #222
The paper "Autonomous Reconstruction of Unknown Indoor Scenes Guided by Time-varying Tensor Fields" and its source code is available here: http://vcc.szu.edu.cn/research/2017/tfnav/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2732939/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The goal of this work is to have a robot that automatically creates a 3D model of an indoor space, including path planning and controlling attention. Now this immediately sounds like quite a challenging task. This robot uses an RGB-D camera, so beyond the colors it also gets depth information that describes how far the viewed objects are from the observer. From this information, it tries to create a 3D digital model of these interiors. It not only does that, but it also constantly replans its trajectory based on newly found areas. These paths have to be smooth and walkable without bumping into objects, and of course they have to adapt to the topology of the building. You'll see in a moment why even this mundane-sounding smoothness requirement is really challenging to accomplish. Spoiler alert: it's about singularities. With this proposed technique, the robot takes a lap in the building and builds a rough representation of it, kind of like a mini-map in your favorite computer games. Previous techniques worked with potential and gradient fields to guide the navigation. The issue with these is that the green dots that you see here represent singularities. These are degenerate points that introduce ambiguity into the path planning process and reduce the efficiency of the navigation. The red dots are sinks, which are even worse because they can trap the robot. The newly proposed tensor field representation contains far fewer singularities, and its favorable mathematical properties make it sink-free. This leads to much better path planning, which is crucial for maintaining high reconstruction quality. If you have a look at the paper, you'll see several occurrences of the word advection. This is particularly cool because the robot paths are advected along these gradient or tensor fields similarly to how fluid flows are computed in simulations, many of which you have seen in this series. Beautiful. Love it. However, as advection can't guarantee full coverage of the building, the proposed technique borrows classic structures from graph theory. Graph theory is used to model the connections between railway stations or to represent people and their relationships in a social network. And here, a method to construct a minimum spanning tree was borrowed to help decide which direction to take at intersections for optimal coverage with minimal effort. Robots, fluids, graph theory, some of my favorite topics of all time, so you probably know how happy I was to see all these theories come together to create something really practical. This is an amazing paper. Make sure to have a look at it. It is available in the video description. The source code of this project is also available. If you have enjoyed this episode, make sure to subscribe to the series and click the bell icon to never miss an episode. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.5200000000000005, "end": 10.200000000000001, "text": " The goal of this work is to have a robot that automatically creates a 3D model of an indoor"}, {"start": 10.200000000000001, "end": 14.44, "text": " space, including path planning and controlling attention."}, {"start": 14.44, "end": 17.92, "text": " Now this immediately sounds like quite a challenging task."}, {"start": 17.92, "end": 24.6, "text": " This robot uses an RGB-D camera, so beyond the colors it also gets some depth information"}, {"start": 24.6, "end": 28.96, "text": " that describes how far the viewed objects are from the observer."}, {"start": 28.96, "end": 34.0, "text": " From this information, it tries to create a 3D digital model of these interiors."}, {"start": 34.0, "end": 39.44, "text": " It not only does that, but it also constantly replants its trajectory based on newly found"}, {"start": 39.44, "end": 40.44, "text": " areas."}, {"start": 40.44, "end": 45.92, "text": " These paths have to be smooth and walkable without bumping into objects and of course adapt"}, {"start": 45.92, "end": 47.8, "text": " to the topology of the building."}, {"start": 47.8, "end": 52.96, "text": " You'll see in a moment why even this mundane sounding smooth part is really challenging"}, {"start": 52.96, "end": 54.16, "text": " to accomplish."}, {"start": 54.16, "end": 56.8, "text": " Spoiler alert, it's about singularities."}, {"start": 56.8, "end": 61.04, "text": " With this proposed technique, the robot takes a lap in the building and builds a rough"}, {"start": 61.04, "end": 65.72, "text": " representation of it, kind of like a mini-map in your favorite computer games."}, {"start": 65.72, "end": 70.56, "text": " Previous techniques worked with potential and gradient fields to guide the navigation."}, {"start": 70.56, "end": 75.56, "text": " The issue with these is that the green dots that you see here represent singularities."}, {"start": 75.56, "end": 80.03999999999999, "text": " These are the generate points that introduce ambiguity to the path planning process and"}, {"start": 80.03999999999999, "end": 82.52, "text": " reduce the efficiency of the navigation."}, {"start": 82.52, "end": 87.44, "text": " The red dots are sinks, which are even worse because they can trap the robot."}, {"start": 87.44, "end": 92.19999999999999, "text": " This new proposed tensor field representation contains a lot fewer singularities and its"}, {"start": 92.19999999999999, "end": 95.6, "text": " favorable mathematical properties make it sink free."}, {"start": 95.6, "end": 100.28, "text": " This leads to much better path planning, which is crucial for maintaining high reconstruction"}, {"start": 100.28, "end": 101.28, "text": " quality."}, {"start": 101.28, "end": 105.75999999999999, "text": " If you have a look at the paper, you'll see several occurrences of the word advection."}, {"start": 105.75999999999999, "end": 111.16, "text": " This is particularly cool because these robot paths are planned in these gradient or tensor"}, {"start": 111.16, "end": 117.56, "text": " fields represent the vertices and flow directions similarly to how fluid flows are computed"}, {"start": 117.56, "end": 121.28, "text": " in simulations, many of which you have seen in this series."}, {"start": 121.28, "end": 122.28, "text": " Beautiful."}, {"start": 122.28, "end": 123.28, "text": " Love it."}, {"start": 123.28, "end": 
127.64, "text": " However, as advection can't guarantee that we'll have a full coverage of the building,"}, {"start": 127.64, "end": 132.07999999999998, "text": " this proposed technique borrows classic structures from graph theory."}, {"start": 132.07999999999998, "end": 137.8, "text": " Graph theory is used to model the connections between railway stations or to represent people"}, {"start": 137.8, "end": 140.48, "text": " and their relationships in a social network."}, {"start": 140.48, "end": 145.44, "text": " And here, a method to construct a minimum spanning tree was borrowed to help deciding which"}, {"start": 145.44, "end": 150.44, "text": " direction to take at intersections for optimal coverage with minimal effort."}, {"start": 150.44, "end": 156.6, "text": " Robots, fluids, graph theory, some of my favorite topics of all time, so you probably know"}, {"start": 156.6, "end": 162.64, "text": " how happy I was to see all these theories come together to create something really practical."}, {"start": 162.64, "end": 164.48, "text": " This is an amazing paper."}, {"start": 164.48, "end": 165.72, "text": " Make sure to have a look at it."}, {"start": 165.72, "end": 167.79999999999998, "text": " It is available in the video description."}, {"start": 167.8, "end": 170.64000000000001, "text": " The source code of this project is also available."}, {"start": 170.64000000000001, "end": 174.48000000000002, "text": " If you have enjoyed this episode, make sure to subscribe to the series and click the bell"}, {"start": 174.48000000000002, "end": 176.4, "text": " icon to never miss an episode."}, {"start": 176.4, "end": 197.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
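The graph-theory ingredient mentioned above, a minimum spanning tree over places the robot can visit, can be illustrated in a few lines. The sketch below runs Kruskal's algorithm on a made-up room-adjacency graph with illustrative costs; it covers only the MST part, not the paper's tensor-field planner.

# Hedged sketch of the minimum-spanning-tree ingredient, using Kruskal's algorithm
# on an invented room-adjacency graph. Costs are illustrative travel efforts.
edges = [
    (1.0, "hall", "kitchen"), (2.5, "hall", "bedroom"),
    (1.5, "kitchen", "bedroom"), (3.0, "bedroom", "bathroom"),
    (2.0, "hall", "bathroom"),
]

parent = {}
def find(x):
    # Union-find root lookup with path compression.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

mst = []
for cost, a, b in sorted(edges):
    ra, rb = find(a), find(b)
    if ra != rb:                 # keep the edge only if it joins two separate components
        parent[ra] = rb
        mst.append((a, b, cost))
print(mst)  # the cheapest set of connections that still reaches every room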
Two Minute Papers
https://www.youtube.com/watch?v=Uo6hFVRsjpA
High-Resolution Neural Texture Synthesis | Two Minute Papers #221
The paper "High-Resolution Multi-Scale Neural Texture Synthesis" and its source code is available here: https://wxs.ca/research/multiscale-neural-synthesis/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2929203/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Deep learning means that we are working with neural networks that contain many inner layers. As neurons in each layer combine information from the layer before, the deeper we go in these networks, the more elaborate details we're going to see. Let's have a look at an example. For instance, if we train a neural network to recognize images of human faces, first we'll see an edge detector, and as a combination of edges, object parts will emerge in the next layer. And in the later layers, a combination of object parts creates object models. Neural texture synthesis is about creating lots of new images based on an input texture, and these new images have to resemble but not copy the input. Previous works on neural texture synthesis focused on how different features in a given layer relate to the ones before and after it. The issue is that because neurons in convolutional neural networks are endowed with a small receptive field, they can only look at an input texture at one scale. So for instance, if you look here, you see that with previous techniques, trying to create small-scale details in a synthesized texture is going to lead to rather poor results. This new method is about changing the inputs and the outputs of the network to be able to process these images at different scales. These scales range from coarser to finer versions of the same images. Sounds simple enough, right? This simple idea makes all the difference. Here, you can see the input texture and here's the output. As you can see, it has different patterns but has very similar properties to the input. And if we zoom into both of these images, we can see that this one is able to create beautiful, high-frequency details as well. Wow, this is some really, really crisp output. Now, it has to be emphasized that this means that the statistical properties of the original image are being mimicked really well. What it doesn't mean is that it takes into consideration the meaning of these images. Just have a look at the synthesized bubbles or the flowers here. The statistical properties of the synthesized textures may be correct, but the semantic meaning of the input is not captured well. In future work, it would be super useful to extend this algorithm to take the structure and the symmetries of the input images into consideration. The source code is available under the permissive MIT license, so don't hold back those crazy experiments. If you have enjoyed this episode and you think the series provides you value or entertainment, please consider supporting us on Patreon. One-time payments and cryptocurrencies like Bitcoin or Ethereum are also supported and have been massively successful. I am really out of words. Thank you so much. The details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.54, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efehir."}, {"start": 4.54, "end": 10.14, "text": " Deep learning means that we are working with neural networks that contain many inner layers."}, {"start": 10.14, "end": 14.56, "text": " As neurons in each layer combine information from the layer before,"}, {"start": 14.56, "end": 18.98, "text": " the deeper we go in these networks, the more elaborate details we're going to see."}, {"start": 18.98, "end": 20.6, "text": " Let's have a look at an example."}, {"start": 20.6, "end": 25.66, "text": " For instance, if we train a neural network to recognize images of human faces,"}, {"start": 25.66, "end": 32.54, "text": " first we'll see an edge detector and as a combination of edges, object parts will emerge in the next layer."}, {"start": 32.54, "end": 37.68, "text": " And in the later layers, a combination of object parts create object models."}, {"start": 37.68, "end": 43.36, "text": " Neural texture synthesis is about creating lots of new images based on an input texture"}, {"start": 43.36, "end": 47.72, "text": " and these new images have to resemble but not copy the input."}, {"start": 47.72, "end": 53.36, "text": " Previous works on neural texture synthesis focused on how different features in a given layer"}, {"start": 53.36, "end": 56.32, "text": " relate to the ones before and after it."}, {"start": 56.32, "end": 62.62, "text": " The issue is that because neurons in convolutional neural networks are in doubt with a small receptive field,"}, {"start": 62.62, "end": 66.38, "text": " they can only look at an input texture at one scale."}, {"start": 66.38, "end": 70.34, "text": " So for instance, if you look here, you see that with previous techniques,"}, {"start": 70.34, "end": 76.84, "text": " trying to create small scale details in a synthesized texture is going to lead to rather poor results."}, {"start": 76.84, "end": 81.08, "text": " This new method is about changing the inputs and the outputs of the network"}, {"start": 81.08, "end": 84.48, "text": " to be able to process these images at different scales."}, {"start": 84.48, "end": 89.58, "text": " These scales range from coarser to finer versions of the same images."}, {"start": 89.58, "end": 91.82, "text": " Sounds simple enough, right?"}, {"start": 91.82, "end": 94.82, "text": " This simple idea makes all the difference."}, {"start": 94.82, "end": 98.96, "text": " Here, you can see the input texture and here's the output."}, {"start": 98.96, "end": 104.1, "text": " As you can see, it has different patterns but has very similar properties to the input."}, {"start": 104.1, "end": 109.88, "text": " And if we zoom into both of these images, we can see that this one is able to create beautiful,"}, {"start": 109.88, "end": 112.14, "text": " high frequency details as well."}, {"start": 112.14, "end": 116.28, "text": " Wow, this is some really, really crisp output."}, {"start": 116.28, "end": 122.25999999999999, "text": " Now, it has to be emphasized that this means that the statistical properties of the original image"}, {"start": 122.25999999999999, "end": 124.19999999999999, "text": " are being mimicked really well."}, {"start": 124.19999999999999, "end": 129.14, "text": " What it doesn't mean is that it takes into consideration the meaning of these images."}, {"start": 129.14, "end": 132.44, "text": " Just have a look at the synthesized bubbles or the flowers here."}, {"start": 132.44, "end": 136.44, "text": " The statistical 
properties of the synthesized textures may be correct,"}, {"start": 136.44, "end": 139.92, "text": " but the semantic meaning of the input is not captured well."}, {"start": 139.92, "end": 143.52, "text": " In a future work, it would be super useful to extend this algorithm"}, {"start": 143.52, "end": 149.44, "text": " to have a greater understanding of the structure and the symmetries of the input images into consideration."}, {"start": 149.44, "end": 153.32, "text": " The source code is available under the permissive MIT license,"}, {"start": 153.32, "end": 155.64, "text": " so don't hold back those crazy experiments."}, {"start": 155.64, "end": 160.84, "text": " If you have enjoyed this episode and you think the series provides you value or entertainment,"}, {"start": 160.84, "end": 163.44, "text": " please consider supporting us on Patreon."}, {"start": 163.44, "end": 168.44, "text": " One-time payments and cryptocurrencies like Bitcoin or Ethereum are also supported"}, {"start": 168.44, "end": 170.8, "text": " and have been massively successful."}, {"start": 170.8, "end": 172.64, "text": " I am really out of words."}, {"start": 172.64, "end": 174.2, "text": " Thank you so much."}, {"start": 174.2, "end": 176.76, "text": " The details are available in the video description."}, {"start": 176.76, "end": 196.56, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
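The multi-scale idea above boils down to computing texture statistics, typically Gram matrices of feature maps, on coarser-to-finer versions of the same image. The sketch below is a simplified, hedged illustration in NumPy where raw pixel channels stand in for CNN features; a real implementation would match these per-scale statistics during optimization.

# Hedged sketch of the multi-scale idea: build a coarse-to-fine pyramid and
# compute a Gram-style statistic at every scale. Raw pixel channels stand in
# for CNN feature maps here, which is a deliberate simplification.
import numpy as np

def downscale(img):
    # Average 2x2 blocks; img has shape (H, W, C) with even H and W.
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2, -1).mean(axis=(1, 3))

def gram(features):
    # features: (H, W, C) -> (C, C) matrix of channel co-activations.
    flat = features.reshape(-1, features.shape[-1])
    return flat.T @ flat / flat.shape[0]

texture = np.random.rand(256, 256, 3)           # stand-in for an input texture
pyramid = [texture]
for _ in range(3):                              # three extra, coarser scales
    pyramid.append(downscale(pyramid[-1]))

targets = [gram(level) for level in pyramid]    # per-scale statistics to match
# A synthesized image would then be optimized so that its own per-scale Gram
# matrices match these targets, which is what ties the scales together.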
Two Minute Papers
https://www.youtube.com/watch?v=MCHw6fUyLMY
Efficient Viscoelastic Fluid Simulations | Two Minute Papers #220
The paper "Conformation Constraints for Efficient Viscoelastic Fluid Simulation" is available here: http://www.gmrv.es/Publications/2017/BGAO17/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1958464/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. It has been a while now since we've talked about fluid simulations, and now it is time for us to have a look at an amazing technique that creates simulations with viscoelastic fluids plus rigid and deformable bodies. This was possible with previous techniques, but it took forever to compute and typically involved computational errors that add up over time and lead to a perceptible loss of viscoelasticity. I'll try to explain these terms and what's going on in a moment. There will be lots of eye candy to feast your eyes on throughout the video, but I can assure you that this scene takes the cake. The simulation phase here ran at one frame per second, which is still not fast enough for the much coveted real-time applications; however, it is super competitive compared to previous offline techniques that could do this. So what does viscosity mean? Viscosity is the resistance of a fluid against deformation. Water has a low viscosity and rapidly takes the form of the cup we pour it into, whereas honey, ketchup and peanut butter have a higher viscosity and are much more resistant to these external forces. Elasticity is a slightly less elusive concept that describes to what degree a piece of fluid behaves like an elastic solid. However, viscous and elastic are not binary yes-or-no concepts. There is a continuum between the two, and if we have the proper machinery, we can create viscoelastic fluid simulations. The tau parameter that you see here controls how viscous or elastic the fluid should be and is referred to in the paper as the relaxation time. As tau is increased towards infinity, friction dominates the internal elastic forces and the polymer won't be able to recover its structure well. In the opposite case, where we reduce the tau parameter to 0, the internal elastic forces dominate and our polymer will be more rigid. The alpha parameter stands for compliance, which describes the fluidity of the model. Using a lower alpha leads to more solid behavior, and a higher alpha leads to more fluid behavior. The cool thing is that as a combination of these two parameters we can produce a lot of really cool materials, ranging from viscous to elastic to inviscid fluid simulations. Have a look at this honey pouring scene. Hmmm. This simulation uses more than 100,000 particles, and 6 of these frames can be simulated in just one second. Wow. If we reduce the number of particles to a few tens of thousands, real-time human interaction with these simulations also becomes a possibility. A limitation of this technique is that most of our decisions involving the physical properties of the fluid are collapsed into the tau and alpha parameters; if we are looking for more esoteric fluid models, we should look elsewhere. I am hoping that since part of the algorithm runs on the graphics card, the speed of this technique can be further improved in the near future. That would be awesome. Admittedly, we've only been scratching the surface, so make sure to have a look at the paper for more details. Thanks for watching and for your generous support. See you next time.
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.72, "end": 9.48, "text": " It has been a while now since we've talked about fluid simulations and now it is time"}, {"start": 9.48, "end": 14.72, "text": " for us to have a look at an amazing technique that creates simulations with viscoelastic"}, {"start": 14.72, "end": 18.28, "text": " fluids plus rigid and deformable bodies."}, {"start": 18.28, "end": 22.96, "text": " This is a possibility with previous techniques that takes forever to compute and typically"}, {"start": 22.96, "end": 28.44, "text": " involves computational errors that add up over time and lead to a perceptible loss of"}, {"start": 28.44, "end": 29.44, "text": " viscoelasticity."}, {"start": 29.44, "end": 33.120000000000005, "text": " I'll try to explain these terms and what's going on in the moment."}, {"start": 33.120000000000005, "end": 37.68, "text": " There will be lots of eye candy to feast your eyes on throughout the video but I can assure"}, {"start": 37.68, "end": 39.92, "text": " you that this scene takes the cake."}, {"start": 39.92, "end": 45.0, "text": " The simulation phase here took place at one frame per second which is still not yet fast"}, {"start": 45.0, "end": 51.400000000000006, "text": " enough for the much coveted real-time applications, however, is super competitive compared to previous"}, {"start": 51.400000000000006, "end": 53.6, "text": " offline techniques that could do this."}, {"start": 53.6, "end": 55.28, "text": " So what does viscosity mean?"}, {"start": 55.28, "end": 59.28, "text": " Viscosity is the resistance of a fluid against deformation."}, {"start": 59.28, "end": 65.08, "text": " Water has a low viscosity and rapidly takes the form of the cup we pour it into where honey,"}, {"start": 65.08, "end": 70.76, "text": " ketchup and peanut butter have a higher viscosity and are much more resistant to these external"}, {"start": 70.76, "end": 71.76, "text": " forces."}, {"start": 71.76, "end": 76.72, "text": " Elasticity is a little less elusive concept that describes to what degree a piece of fluid"}, {"start": 76.72, "end": 79.0, "text": " behaves like elastic solids."}, {"start": 79.0, "end": 83.76, "text": " However, viscous and elastic are not binary yes or no concepts."}, {"start": 83.76, "end": 88.32000000000001, "text": " There is a continuum between the two and if we have the proper machinery we can create"}, {"start": 88.32000000000001, "end": 90.80000000000001, "text": " viscoelastic fluid simulations."}, {"start": 90.80000000000001, "end": 96.16000000000001, "text": " The tau parameter that you see here controls how viscous or elastic the fluid should be"}, {"start": 96.16000000000001, "end": 99.44, "text": " and is referred to in the paper as relaxation time."}, {"start": 99.44, "end": 104.76, "text": " As tau is increased towards infinity, the friction dominates the internal elastic forces"}, {"start": 104.76, "end": 108.24000000000001, "text": " and the polymer won't be able to recover its structure well."}, {"start": 108.24, "end": 114.36, "text": " The opposite cases where we reduce the tau parameter to 0, then the internal elastic forces"}, {"start": 114.36, "end": 117.56, "text": " will dominate and our polymer will be more rigid."}, {"start": 117.56, "end": 122.6, "text": " The alpha parameter stands for compliance, which describes the fluidity of the model."}, {"start": 122.6, "end": 128.68, "text": " Using a lower alpha leads to more solid 
behavior and higher alpha leads to more fluid behavior."}, {"start": 128.68, "end": 133.72, "text": " The cool thing is that as a combination of these two parameters we can produce a lot of"}, {"start": 133.72, "end": 140.2, "text": " really cool materials ranging from viscous to elastic to inviscid fluid simulations."}, {"start": 140.2, "end": 143.04, "text": " Have a look at this honey pouring scene."}, {"start": 143.04, "end": 144.04, "text": " Hmmm."}, {"start": 144.04, "end": 150.0, "text": " This simulation uses more than 100,000 particles and 6 of these frames can be simulated"}, {"start": 150.0, "end": 151.8, "text": " in just one second."}, {"start": 151.8, "end": 152.8, "text": " Wow."}, {"start": 152.8, "end": 157.68, "text": " If we reduce the number of particles to a few tens of thousands real time human interaction"}, {"start": 157.68, "end": 160.72, "text": " with these simulations also becomes a possibility."}, {"start": 160.72, "end": 165.35999999999999, "text": " A limitation of this technique is that most of our decisions involving the physical properties"}, {"start": 165.35999999999999, "end": 170.56, "text": " of the fluid are collapsed into the tau and alpha parameters if we are looking for more"}, {"start": 170.56, "end": 173.6, "text": " esoteric fluid models we should look elsewhere."}, {"start": 173.6, "end": 178.2, "text": " I am hoping that since part of the algorithm runs on the graphics card the speed of this"}, {"start": 178.2, "end": 181.32, "text": " technique can be further improved in the near future."}, {"start": 181.32, "end": 182.64, "text": " That would be awesome."}, {"start": 182.64, "end": 186.44, "text": " Admittedly we've only been scratching the surface so make sure to have a look at the"}, {"start": 186.44, "end": 188.0, "text": " paper for more details."}, {"start": 188.0, "end": 190.64, "text": " Thanks for watching and for your generous support."}, {"start": 190.64, "end": 218.27999999999997, "text": " See you next time."}]
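The transcript above describes how the compliance parameter alpha controls whether the material behaves more like a solid or a fluid. Below is a minimal, hypothetical sketch of a generic XPBD-style compliant distance constraint (not the paper's conformation constraints); the function name and toy setup are illustrative only, but it shows the same trend the video describes: a low alpha holds the rest shape (solid-like), while a high alpha lets the material yield (fluid-like).

```python
import numpy as np

def xpbd_distance_step(x1, x2, w1, w2, rest_len, lam, alpha, dt):
    """One XPBD-style solve of a distance constraint C = |x1 - x2| - rest_len."""
    d = x1 - x2
    length = np.linalg.norm(d)
    n = d / length                       # constraint gradient direction
    C = length - rest_len                # current constraint violation
    alpha_tilde = alpha / dt ** 2        # time-step scaled compliance
    dlam = (-C - alpha_tilde * lam) / (w1 + w2 + alpha_tilde)
    return x1 + w1 * dlam * n, x2 - w2 * dlam * n, lam + dlam

# Two particles stretched beyond their rest length of 1.0:
# a stiff element (low alpha) snaps back, a compliant one (high alpha) yields.
for alpha in (1e-8, 1e-2):
    x1, x2 = np.array([0.0, 0.0]), np.array([1.5, 0.0])
    lam = 0.0
    for _ in range(20):                  # constraint iterations within one time step
        x1, x2, lam = xpbd_distance_step(x1, x2, 1.0, 1.0, 1.0, lam, alpha, dt=1 / 60)
    print(f"alpha={alpha:g}  final length={np.linalg.norm(x1 - x2):.2f}")
```

Running this prints a final length of about 1.00 for the stiff setting and about 1.47 for the compliant one, i.e. the compliant element never fully recovers its rest length within the time step.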
Two Minute Papers
https://www.youtube.com/watch?v=_BPJFFkxSbw
Deep Image Prior | Two Minute Papers #219
The paper "Deep Image Prior" and its source code is available here: https://dmitryulyanov.github.io/deep_image_prior Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/cfB62s Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about performing useful image restoration tasks with a convolutional neural network, with an additional twist. Its main use cases are as follows. One, in the case of JPEG artifact removal, the input is this image with many blocky artifacts that materialize during compression, and the output is a restored version of this image. Two, image inpainting, where some regions of the input image are missing and are to be filled with useful and hopefully plausible information. Three, super resolution, where the input image is intact but is very coarse and has low resolution, and the output should be a more detailed, higher resolution version of the same image. This is the classic "enhance" scenario from the CSI TV series. It is typically hard to do because there is a stupendously large number of possible high-resolution image solutions that we could come up with as an output. Four, image denoising is also a possibility. The standard way of doing these is that we train such a network on a large database of images so that it can learn the concept of many object classes, such as humans, animals and more, and also the typical features and motifs that are used to construct such images. These networks have some sort of understanding of these images and hence can perform these operations better than most handcrafted algorithms. So let's have a look at some comparisons. Do you see these bold-lettered labels that classify these algorithms as trained or untrained? Bicubic interpolation is a classic untrained algorithm that almost naively tries to guess the pixel colors by averaging their neighbors. This is clearly untrained because it does not take a database of images to learn on. Understandably, these results are lackluster, which shows that non-learning-based algorithms are not great at this. SRResNet is a state-of-the-art learning-based technique for super resolution that was trained on a large database of input images. It is clearly doing way better than bicubic interpolation. And look, we have this deep prior algorithm that performs comparably well but is labeled as untrained. So what is going on here? And here comes the twist. This convolutional neural network is actually untrained. This means that the neural weights are randomly initialized, which generally leads to completely useless results on most problems. So no aspect of this network works through data it has learned on; all the required information is contained within the structure of the network itself. We all know that the structure of these neural networks matters a great deal, but in this case it is shown that it is at least as important as the training data itself. A very interesting and esoteric idea indeed. Please make sure to have a look at the paper, as there are many details to be understood to get a more complete view of this conclusion. In the comparisons, beyond the images, researchers often publish this PSNR number that you see for each image. This is the peak signal-to-noise ratio, which measures how close the output image is to the ground truth, and this number is of course always subject to maximization. Remarkably, this untrained network performs well both on images with natural patterns and on man-made objects.
Reconstruction from a pair of flash and no-flash photography images is also a possibility, and the output does not contain the light leaks produced by a highly competitive handcrafted algorithm, a joint bilateral filter. Quite remarkable indeed. The supplementary materials and the project website contain a ton of comparisons against competing techniques, so make sure to have a look at those if you would like to know more. The source code of this project is available under the permissive Apache 2.0 license. If you have enjoyed this episode and you feel that 8 of these videos a month are worth a dollar, please consider supporting us on Patreon. One dollar is almost nothing, but it keeps the papers coming. Recently, we have also added the possibility of one-time payments through PayPal and cryptocurrencies. I was stunned to see how generous our crypto-loving Fellow Scholars are. Since most of these crypto donations are anonymous and it is not possible to say thank you to everyone individually, I would like to say a huge thanks to everyone who supports the series, and this applies to everyone regardless of contribution; just watching the series and spreading the word is already a great deal of help for us. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejeona Ifehir."}, {"start": 4.6000000000000005, "end": 9.8, "text": " This work is about performing useful image restoration tasks with a convolution on your own"}, {"start": 9.8, "end": 12.52, "text": " network with an additional twist."}, {"start": 12.52, "end": 15.16, "text": " Its main use cases are as follows."}, {"start": 15.16, "end": 21.88, "text": " One, in the case of JPEG artifact removal, the input is this image with many blocky artifacts"}, {"start": 21.88, "end": 27.240000000000002, "text": " that materialize during compression and the output is a restored version of this image."}, {"start": 27.24, "end": 32.28, "text": " Two, image inpainting, where some regions of the input image are missing and are to be"}, {"start": 32.28, "end": 36.16, "text": " filled with useful and hopefully plausible information."}, {"start": 36.16, "end": 41.599999999999994, "text": " Three, super resolution, where the input image is intact but is very coarse and has low"}, {"start": 41.599999999999994, "end": 46.68, "text": " resolution, and the output should be a more detailed, higher resolution version of the"}, {"start": 46.68, "end": 47.84, "text": " same image."}, {"start": 47.84, "end": 52.2, "text": " This is the classic enhanced scenario from the CSI-TV series."}, {"start": 52.2, "end": 57.800000000000004, "text": " It is typically hard to do because there is a stupendously large number of possible high-resolution"}, {"start": 57.800000000000004, "end": 61.28, "text": " image solutions that we could come up with as an output."}, {"start": 61.28, "end": 64.68, "text": " Four, image denoising is also a possibility."}, {"start": 64.68, "end": 69.60000000000001, "text": " The standard way of doing these is that we train such a network on a large database of"}, {"start": 69.60000000000001, "end": 75.80000000000001, "text": " images so that they can learn the concept of many object classes such as humans, animals"}, {"start": 75.80000000000001, "end": 81.80000000000001, "text": " and more and also the typical features and motives that are used to construct such images."}, {"start": 81.8, "end": 86.6, "text": " These networks have some sort of understanding of these images and hence can perform these"}, {"start": 86.6, "end": 90.24, "text": " operations better than most handcrafted algorithms."}, {"start": 90.24, "end": 92.28, "text": " So let's have a look at some comparisons."}, {"start": 92.28, "end": 98.24, "text": " Do you see these bold lettered labels that classify these algorithms as trained or untrained?"}, {"start": 98.24, "end": 104.24, "text": " The bi-cubic interpolation is a classic untrained algorithm that almost naively tries to guess"}, {"start": 104.24, "end": 107.44, "text": " the pixel colors by averaging its neighbors."}, {"start": 107.44, "end": 112.28, "text": " This is clearly untrained because it does not take a database of images to learn on."}, {"start": 112.28, "end": 117.24, "text": " Understandably, the fact that these results are leg-luster is to show that non-learning"}, {"start": 117.24, "end": 119.92, "text": " based algorithms are not great at this."}, {"start": 119.92, "end": 126.08, "text": " The SR-ResNet is a state-of-the-art learning-based technique for super-resolution that was trained"}, {"start": 126.08, "end": 128.4, "text": " on a large database of input images."}, {"start": 128.4, "end": 132.36, "text": " It is clearly doing way better than 
bi-cubic interpolation."}, {"start": 132.36, "end": 138.64000000000001, "text": " And look, we have this deep prior algorithm that performs comparably well but is labeled"}, {"start": 138.64000000000001, "end": 140.08, "text": " to be untrained."}, {"start": 140.08, "end": 141.8, "text": " So what is going on here?"}, {"start": 141.8, "end": 143.24, "text": " And here comes the twist."}, {"start": 143.24, "end": 147.04000000000002, "text": " This convolutional neural network is actually untrained."}, {"start": 147.04000000000002, "end": 152.28000000000003, "text": " This means that the neural weights are randomly initialized which generally leads to completely"}, {"start": 152.28000000000003, "end": 154.64000000000001, "text": " useless results on most problems."}, {"start": 154.64000000000001, "end": 159.48000000000002, "text": " So no aspect of this network works through the data it has learned on, all the required"}, {"start": 159.48, "end": 163.67999999999998, "text": " information is contained within the structure of the network itself."}, {"start": 163.67999999999998, "end": 167.83999999999997, "text": " We all know that the structure of these neural networks matter a great deal, but in this"}, {"start": 167.83999999999997, "end": 173.16, "text": " case it is shown that it is at least as important as the training data itself."}, {"start": 173.16, "end": 177.79999999999998, "text": " A very interesting and esoteric idea indeed, please make sure to have a look at the paper"}, {"start": 177.79999999999998, "end": 182.39999999999998, "text": " for details as there are many details to be understood to get a more complete view"}, {"start": 182.39999999999998, "end": 183.72, "text": " of this conclusion."}, {"start": 183.72, "end": 189.44, "text": " In the comparisons, beyond the images, researchers often publish this PSNR number that you see"}, {"start": 189.44, "end": 190.72, "text": " for each image."}, {"start": 190.72, "end": 196.28, "text": " This is the peak signal to noise ratio, which means how close the output image is to the"}, {"start": 196.28, "end": 201.16, "text": " ground truth and this number is of course always subject to maximization."}, {"start": 201.16, "end": 207.4, "text": " Remarkably this untrained network performs well on both images with natural patterns and"}, {"start": 207.4, "end": 208.92, "text": " man-made objects."}, {"start": 208.92, "end": 214.52, "text": " Reconstruction from a pair of flash and no flash photography images is also a possibility"}, {"start": 214.52, "end": 219.60000000000002, "text": " and the algorithm does not contain the light leaks produced by a highly competitive handcrafted"}, {"start": 219.60000000000002, "end": 222.48000000000002, "text": " algorithm, a joint bilateral filter."}, {"start": 222.48000000000002, "end": 223.88, "text": " Quite remarkable indeed."}, {"start": 223.88, "end": 228.72, "text": " The supplementary materials and the project website contain a ton of comparisons against"}, {"start": 228.72, "end": 232.92000000000002, "text": " competing techniques, so make sure to have a look at that if you would like to know more."}, {"start": 232.92000000000002, "end": 238.44, "text": " The source code of this project is available under the permissive Apache 2.0 license."}, {"start": 238.44, "end": 242.68, "text": " If you have enjoyed this episode and you feel that 8 of these videos a month is worth a"}, {"start": 242.68, "end": 246.12, "text": " dollar, please consider supporting us on Patreon."}, {"start": 246.12, "end": 
249.72, "text": " One dollar is almost nothing, but it keeps the papers coming."}, {"start": 249.72, "end": 255.8, "text": " Recently we have also added the possibility of one-time payments through PayPal and Cryptocurrencies."}, {"start": 255.8, "end": 260.24, "text": " I was stunned to see how generous our crypto-loving fellow scholars are."}, {"start": 260.24, "end": 264.72, "text": " Since most of these crypto donations are anonymous and it is not possible to say thank you to"}, {"start": 264.72, "end": 269.32, "text": " everyone individually, I would like to say a huge thanks to everyone who supports the"}, {"start": 269.32, "end": 274.44, "text": " series and this applies to everyone regardless of contribution, just watching the series"}, {"start": 274.44, "end": 277.71999999999997, "text": " and spreading the word is already a great deal of help for us."}, {"start": 277.72, "end": 299.44000000000005, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=zjaz2mC1KhM
Distilling Neural Networks | Two Minute Papers #218
The paper "Distilling a Neural Network Into a Soft Decision Tree" is available here: https://arxiv.org/pdf/1711.09784.pdf Decision Trees and Boosting, XGBoost: https://www.youtube.com/watch?v=0Xc9LIb_HTw We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payment links are available below. Thank you very much for your generous support! PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Decision tree image sources: 1. https://github.com/SilverDecisions/SilverDecisions/wiki/Gallery 2. https://commons.wikimedia.org/wiki/File:Decision_tree_model.png Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/gjWHVF Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #distillation
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Since the latest explosion in AI research, virtually no field of science remains untouched by neural networks. These are amazing tools that help us solve problems where the solutions are easy to identify but difficult to explain. For instance, we all know a backflip when we see one, but mathematically defining all the required forces, rotations and torques is much more challenging. Neural networks excel at these kinds of tasks, provided that we can supply them with a large number of training samples. If we peek inside these neural networks, we see more and more layers, and more and more neurons within these layers, as the years go by. The final decision depends on which neurons are activated by our inputs. They are highly efficient; however, trying to understand how a decision is being made by these networks is going to be a fruitless endeavor. This is especially troublesome when the network gives us a wrong answer that we, without having access to any sort of explanation, may erroneously accept without proper consideration. This piece of work is about distillation, which means that we take a neural network and try to express its inner workings in the form of a decision tree. Decision trees take into consideration a series of variables and provide a clear roadmap towards the decision based on them. For instance, they are useful for using the age and the amount of free time of people to try to guess whether they are likely to play video games, or for deciding who should get a loan from the bank based on their age, occupation and income. Yeah, this sounds great. However, the main issue is that decision trees are not good substitutes for neural networks. The theory says that we have a generalization versus interpretability trade-off situation, which means that trees that provide us good decisions overfit the training data and generalize poorly, and the ones that are easy to interpret are inaccurate. So, in order to break out of this trade-off situation, a key idea of this piece of work is to ask the neural network to build a decision tree by taking an input dataset for training, trying to generate more training data that follows the same properties, and feeding all this to the decision tree. Here are some results for the classical problem of identifying digits in the MNIST dataset. As each decision is meant to cut the number of output options in half, it shows really well that we can very effectively perform the classification in only four decisions from a given input. And not only that, but it also shows what it is looking for. For instance, here we can see that in the final decision, before concluding whether the input number is a 3 or an 8, it looks for the presence of a tiny area that joins the ends of the 3 to make an 8. A different visualization of the Connect 4 game dataset reveals that the neural network quickly tries to distinguish two types of strategies: one where the players start playing on the inner region of the board, and one on the outer region. It is shown that these trees perform better than traditional decision trees. What's more, they are only slightly worse than the corresponding neural networks, but can explain their decisions much more clearly and are also faster. In summary, the rate of progress in machine learning research is truly insane these days, and I am all for papers that try to provide us a greater understanding of what is happening under the hood. I am loving this idea in particular.
We had an earlier episode on how to supercharge these decision trees via tree boosting. If you are interested in learning more about it, the link is available in the video description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.48, "end": 10.120000000000001, "text": " Since the latest explosion in AI research, virtually no field of science remains untouched"}, {"start": 10.120000000000001, "end": 11.56, "text": " by neural networks."}, {"start": 11.56, "end": 17.04, "text": " These are amazing tools that help us solve problems where the solutions are easy to identify"}, {"start": 17.04, "end": 18.68, "text": " but difficult to explain."}, {"start": 18.68, "end": 23.16, "text": " For instance, we all know a backflip when we see one, but mathematically defining all the"}, {"start": 23.16, "end": 27.92, "text": " required forces, rotations and torque is much more challenging."}, {"start": 27.92, "end": 33.36, "text": " Neural networks excel at these kinds of tasks provided that we can supply them a large number"}, {"start": 33.36, "end": 34.68, "text": " of training samples."}, {"start": 34.68, "end": 38.96, "text": " If we pick inside these neural networks, we see more and more layers and more and more"}, {"start": 38.96, "end": 41.800000000000004, "text": " neurons within these layers as years go by."}, {"start": 41.800000000000004, "end": 46.2, "text": " The final decision depends on what neurons are activated by our inputs."}, {"start": 46.2, "end": 51.24, "text": " They are highly efficient, however, trying to understand how a decision is being made by"}, {"start": 51.24, "end": 54.28, "text": " these networks is going to be a fruitless endeavor."}, {"start": 54.28, "end": 59.04, "text": " This is especially troublesome when the network gives us a wrong answer that we, without"}, {"start": 59.04, "end": 64.96000000000001, "text": " having access to any sort of explanation, may erroneously accept without proper consideration."}, {"start": 64.96000000000001, "end": 69.56, "text": " This piece of work is about distillation, which means that we take a neural network and"}, {"start": 69.56, "end": 74.4, "text": " try to express its inner workings in the form of a decision tree."}, {"start": 74.4, "end": 79.0, "text": " Decision trees take into consideration a series of variables and provide a clear roadmap"}, {"start": 79.0, "end": 81.24000000000001, "text": " towards the decision based on them."}, {"start": 81.24, "end": 85.96, "text": " For instance, they are useful in using the age and the amount of free time of people to"}, {"start": 85.96, "end": 90.88, "text": " try to guess whether they are likely to play video games or deciding who should get"}, {"start": 90.88, "end": 95.19999999999999, "text": " a loan from the bank based on their age, occupation and income."}, {"start": 95.19999999999999, "end": 97.32, "text": " Yeah, this sounds great."}, {"start": 97.32, "end": 102.72, "text": " However, the main issue is that decision trees are not good substitutes for neural networks."}, {"start": 102.72, "end": 108.67999999999999, "text": " The theory says that we have a generalization versus interpretability trade off situation,"}, {"start": 108.68, "end": 113.44000000000001, "text": " which means that trees that provide us good decisions overfit the training data and"}, {"start": 113.44000000000001, "end": 118.4, "text": " generalize poorly and the ones that are easy to interpret are inaccurate."}, {"start": 118.4, "end": 123.48, "text": " So in order to break out of this trade off situation, a key idea of this piece of work"}, {"start": 123.48, "end": 128.68, "text": " is to 
ask the neural network to build a decision tree by taking an input dataset for"}, {"start": 128.68, "end": 134.88, "text": " training, trying to generate more training data that follows the same properties and feed"}, {"start": 134.88, "end": 136.92000000000002, "text": " all this to the decision tree."}, {"start": 136.92, "end": 142.28, "text": " Here are some results for the classical problem of identifying digits in the MNIST dataset."}, {"start": 142.28, "end": 147.44, "text": " As each decision is meant to cut the number of output options in half, it shows really"}, {"start": 147.44, "end": 152.76, "text": " well that we can very effectively perform the classification in only four decisions from"}, {"start": 152.76, "end": 153.95999999999998, "text": " a given input."}, {"start": 153.95999999999998, "end": 157.64, "text": " And not only that, but it also shows what it is looking for."}, {"start": 157.64, "end": 162.6, "text": " For instance, here we can see that the final decision before we conclude whether the input"}, {"start": 162.6, "end": 169.24, "text": " number is a 3 or 8, it looks for the presence of a tiny area that joins the ends of the"}, {"start": 169.24, "end": 171.12, "text": " 3 to make an 8."}, {"start": 171.12, "end": 176.56, "text": " A different visualization of the Connect 4 game dataset reveals that the neural network"}, {"start": 176.56, "end": 180.07999999999998, "text": " quickly tries to distinguish two types of strategies."}, {"start": 180.07999999999998, "end": 185.07999999999998, "text": " One, where the players start playing on the inner and one with the outer region of the"}, {"start": 185.07999999999998, "end": 186.07999999999998, "text": " board."}, {"start": 186.07999999999998, "end": 190.48, "text": " It is shown that these trees perform better than traditional decision trees."}, {"start": 190.48, "end": 195.04, "text": " Once more, they are only slightly worse than the corresponding neural networks, but can"}, {"start": 195.04, "end": 200.0, "text": " explain their decisions much more clearly and are also faster."}, {"start": 200.0, "end": 205.2, "text": " In summary, the rate of progress in machine learning research is truly insane these days,"}, {"start": 205.2, "end": 210.12, "text": " and I am all for papers that try to provide us a greater understanding of what is happening"}, {"start": 210.12, "end": 211.12, "text": " under the hood."}, {"start": 211.12, "end": 213.48, "text": " I am loving this idea in particular."}, {"start": 213.48, "end": 218.76, "text": " We had an earlier episode on how to supercharge these decision trees via tree boosting."}, {"start": 218.76, "end": 223.12, "text": " If you are interested in learning more about it, the link is available in the video description."}, {"start": 223.12, "end": 251.64000000000001, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
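The transcript above outlines the distillation recipe: generate additional data resembling the training set, label it with the neural network, and fit a decision tree to those labels. The sketch below is a loose, hypothetical approximation using scikit-learn; note that it fits an ordinary hard decision tree, whereas the paper uses soft decision trees, and the Gaussian noise augmentation is only a stand-in for the paper's data generation scheme.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

# "Teacher": a small neural network trained on the original labels.
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X, y)

# Generate extra inputs near the training data and label everything with the
# teacher, so the tree mimics the network rather than the raw labels.
X_aug = np.vstack([X, X + np.random.default_rng(0).normal(0.0, 1.0, X.shape)])
y_teacher = teacher.predict(X_aug)

# "Student": an interpretable tree fit to the teacher's decisions.
student = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_aug, y_teacher)
print("agreement with teacher:", (student.predict(X) == teacher.predict(X)).mean())
```

The printed agreement score indicates how faithfully the interpretable student mimics the teacher network on the original inputs.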
Two Minute Papers
https://www.youtube.com/watch?v=XhH2Cc4thJw
AI Learns Semantic Image Manipulation | Two Minute Papers #217
The paper "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs" and its source code is available here: https://tcwang0509.github.io/pix2pixHD/ Openings at our Institute. Make sure to mention to the contact person that you found this through Two Minute Papers! https://www.cg.tuwien.ac.at/jobs/3dspatialization/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1721451/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This technique is about creating high-resolution images from semantic maps. A semantic map is a colorful image where each of the colors denotes an object class, such as pedestrians, cars, traffic signs and lights, buildings and so on. Normally, we use light simulation programs or rasterization to render such an image, but AI researchers asked the question: why do we even need a renderer if we can code up a learning algorithm that synthesizes the images by itself? Whoa! This generative adversarial network takes this input semantic map and synthesizes a high-resolution photorealistic image from it. Previous techniques were mostly capable of creating coarser, lower resolution images, and they were also rarely photorealistic. And get this, this one produces 2K by 1K pixel outputs, which is close to full HD in terms of pixel count. If we wish to change something in a photorealistic image, we likely need a graphic designer and lots of expertise in Photoshop and similar tools. In the end, even simpler edits are very laborious to make because the human eye is very difficult to fool. An advantage of working with these semantic maps is that they are super easy to edit without any expertise. For instance, we can exert control over the outputs by choosing from a number of different possible options to fill the labels. These are often not just reskinned versions of the same car or road, but can represent a vastly different solution by changing the material of the road from concrete to dirt. Or, it is super easy to replace trees with buildings; all we have to do is rename the labels in the input image. These results are not restricted to outdoor traffic images; individual parts of human faces are also editable. For instance, adding a mustache has never been easier. The results are compared to a previous technique by the name Pix2Pix and against Cascaded Refinement Networks. You can see that the quality of the outputs vastly outperforms both of them, and the images are also of a visibly higher resolution. It is quite interesting to call these previous work, because both of these papers came out this year. For instance, our episode on Pix2Pix came nine months ago, and it has already been improved upon by a significant margin. The joys of machine learning research. Part of the trick is that the semantic map is not only used by itself, but a boundary map is also created to encourage the algorithm to create outputs with better segmentation. This boundary information turned out to be just as useful as the labels themselves. Another part of the trick is to create multiple discriminator networks and run them on a variety of coarse-to-fine scale images. There is much, much more in the paper, make sure to have a look for more details. Since it is difficult to mathematically evaluate the quality of these images, a user study was carried out in the paper. In the end, if we take a practical mindset, these tools are to be used by artists, and it is reasonable to say that whichever one is favored by humans should be accepted as the superior method for now. This tool is going to be a complete powerhouse for artists in the industry. And by this, I mean right now, because the source code of this project is available to everyone, free of charge. In the meantime, we have an opening at our institute at the Vienna University of Technology for one PhD student and one postdoc.
The link is available in the video description; read it carefully to make sure you qualify, and if you do, apply through the email address of Professor Michael Wimmer. Make sure to mention Two Minute Papers in your message. This is an excellent opportunity to turn your life around, live in an amazing city, learn a lot and write amazing papers. It doesn't get any better than that. The deadline is the end of January. Thanks for watching and for your generous support, and I'll see you next year.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejonei Fahir."}, {"start": 4.64, "end": 9.120000000000001, "text": " This technique is about creating high-resolution images from semantic maps."}, {"start": 9.120000000000001, "end": 14.4, "text": " A semantic map is a colorful image where each of the colors denote an object class, such"}, {"start": 14.4, "end": 19.48, "text": " as pedestrians, cars, traffic signs and lights, buildings and so on."}, {"start": 19.48, "end": 24.6, "text": " Normally, we use light simulation programs or restoration to render such an image, but"}, {"start": 24.6, "end": 30.32, "text": " AI researchers ask the question, why do we even need a renderer if we can code up a learning"}, {"start": 30.32, "end": 34.24, "text": " algorithm that synthesizes the images by itself?"}, {"start": 34.24, "end": 35.24, "text": " Whoa!"}, {"start": 35.24, "end": 40.160000000000004, "text": " This generative adversarial network takes this input semantic map and synthesizes a high"}, {"start": 40.160000000000004, "end": 43.0, "text": " resolution photorealistic image from it."}, {"start": 43.0, "end": 47.84, "text": " Previous techniques were mostly capable of creating coarser, lower resolution images"}, {"start": 47.84, "end": 50.32, "text": " and also they were rarely photorealistic."}, {"start": 50.32, "end": 57.32, "text": " And get this, this one produces 2K by 1K pixel outputs, which is close to full HD in terms"}, {"start": 57.32, "end": 58.64, "text": " of pixel count."}, {"start": 58.64, "end": 63.72, "text": " If we wish to change something in a photorealistic image, we likely need a graphic designer and"}, {"start": 63.72, "end": 67.28, "text": " lots of expertise in Photoshop and similar tools."}, {"start": 67.28, "end": 72.68, "text": " In the end, even simpler edits are very laborious to make because the human eye is very difficult"}, {"start": 72.68, "end": 73.68, "text": " to fool."}, {"start": 73.68, "end": 78.24000000000001, "text": " An advantage of working with these semantic maps is that they are super easy to edit without"}, {"start": 78.24000000000001, "end": 79.64, "text": " any expertise."}, {"start": 79.64, "end": 84.28, "text": " For instance, we can exert control on the outputs by choosing from a number of different"}, {"start": 84.28, "end": 86.96000000000001, "text": " possible options to fill the labels."}, {"start": 86.96000000000001, "end": 92.12, "text": " These are often not just risky versions of the same car or road, but can represent a"}, {"start": 92.12, "end": 97.68, "text": " vastly different solution by changing the material of the road from concrete to dirt."}, {"start": 97.68, "end": 102.8, "text": " Or, it is super easy to replace trees with buildings, all we have to do is rename the"}, {"start": 102.8, "end": 104.76, "text": " labels in the input image."}, {"start": 104.76, "end": 110.44, "text": " These results are not restricted to outdoor traffic images, individual parts of human faces"}, {"start": 110.44, "end": 111.84, "text": " are also editable."}, {"start": 111.84, "end": 115.24000000000001, "text": " For instance, adding a mustache has never been easier."}, {"start": 115.24000000000001, "end": 121.08000000000001, "text": " The results are compared to a previous technique by the name Pix2Pix and against Cascaded Refinement"}, {"start": 121.08000000000001, "end": 122.08000000000001, "text": " Networks."}, {"start": 122.08000000000001, "end": 127.28, "text": " You can see that the 
quality of the outputs vastly outperforms both of them and the images"}, {"start": 127.28, "end": 130.04000000000002, "text": " are also a visibly higher resolution."}, {"start": 130.04000000000002, "end": 134.6, "text": " It is quite interesting to say that these are previous work because both of these papers"}, {"start": 134.6, "end": 136.16, "text": " came out this year."}, {"start": 136.16, "end": 141.84, "text": " For instance, our episode on Pix2Pix came nine months ago and it has already been improved"}, {"start": 141.84, "end": 143.68, "text": " by a significant margin."}, {"start": 143.68, "end": 146.16, "text": " The joys of machine learning research."}, {"start": 146.16, "end": 150.79999999999998, "text": " Part of the trick is that the semantic map is not only used by itself, but a boundary"}, {"start": 150.79999999999998, "end": 156.64, "text": " map is also created to encourage the algorithm to create outputs with better segmentation."}, {"start": 156.64, "end": 161.64, "text": " This boundary information turned out to be just as useful as the labels themselves."}, {"start": 161.64, "end": 166.72, "text": " Part trick is to create multiple discriminator networks and run them on a variety of course"}, {"start": 166.72, "end": 168.72, "text": " to find scale images."}, {"start": 168.72, "end": 173.11999999999998, "text": " There is much, much more in the paper, make sure to have a look for more details."}, {"start": 173.11999999999998, "end": 178.11999999999998, "text": " Since it is difficult to mathematically evaluate the quality of these images, a user study was"}, {"start": 178.11999999999998, "end": 179.67999999999998, "text": " carried out in the paper."}, {"start": 179.67999999999998, "end": 185.11999999999998, "text": " In the end, if we take a practical mindset, these tools are to be used by artists and it"}, {"start": 185.11999999999998, "end": 190.64, "text": " is reasonable to say that whichever one is favored by humans should be accepted as a superior"}, {"start": 190.64, "end": 191.92, "text": " method for now."}, {"start": 191.92, "end": 196.39999999999998, "text": " This tool is going to be a complete powerhouse for artists in the industry."}, {"start": 196.39999999999998, "end": 201.16, "text": " And by this, I mean right now, because the source code of this project is available to"}, {"start": 201.16, "end": 203.16, "text": " everyone, free of charge."}, {"start": 203.16, "end": 207.76, "text": " In the meantime, we have an opening at our institute at the Vienna University of Technology"}, {"start": 207.76, "end": 210.83999999999997, "text": " for one PhD student and one postdoc."}, {"start": 210.83999999999997, "end": 215.44, "text": " The link is available in the video description, read it carefully to make sure you qualify"}, {"start": 215.44, "end": 219.44, "text": " and if you apply through the email address of Professor Mikhail Vima."}, {"start": 219.44, "end": 222.2, "text": " Make sure to mention two minute papers in your message."}, {"start": 222.2, "end": 227.56, "text": " This is an excellent opportunity to turn your life around, live in an amazing city, learn"}, {"start": 227.56, "end": 230.84, "text": " a lot and write amazing papers."}, {"start": 230.84, "end": 232.92, "text": " It doesn't get any better than that."}, {"start": 232.92, "end": 234.56, "text": " That line is end of January."}, {"start": 234.56, "end": 254.68, "text": " Thanks for watching and for your generous support and I'll see you next year."}]
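The transcript above mentions that, in addition to the semantic label map, a boundary map is fed to the network to encourage better separation between adjacent objects. Below is a small, hypothetical NumPy sketch of how such inputs could be assembled; the helper names and the toy 4x4 scene are made up for illustration and do not come from the pix2pixHD code base.

```python
import numpy as np

def one_hot_labels(label_map, num_classes):
    """Turn an (H, W) integer semantic map into a (num_classes, H, W) one-hot tensor."""
    return (np.arange(num_classes)[:, None, None] == label_map[None]).astype(np.float32)

def boundary_map(instance_map):
    """Mark pixels whose 4-neighbourhood contains a different instance id."""
    b = np.zeros_like(instance_map, dtype=np.float32)
    b[:, 1:]  = np.maximum(b[:, 1:],  instance_map[:, 1:] != instance_map[:, :-1])
    b[:, :-1] = np.maximum(b[:, :-1], instance_map[:, 1:] != instance_map[:, :-1])
    b[1:, :]  = np.maximum(b[1:, :],  instance_map[1:, :] != instance_map[:-1, :])
    b[:-1, :] = np.maximum(b[:-1, :], instance_map[1:, :] != instance_map[:-1, :])
    return b

# Toy 4x4 scene: label 0 = road, 1 = car; two separate car instances (ids 1 and 2).
labels    = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 0], [0, 1, 1, 0]])
instances = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 2, 2, 0], [0, 2, 2, 0]])
net_input = np.concatenate([one_hot_labels(labels, 2), boundary_map(instances)[None]], axis=0)
print(net_input.shape)   # (3, 4, 4): 2 label channels + 1 boundary channel
```

Stacking the one-hot label channels with the boundary channel gives the kind of multi-channel input a conditional GAN generator could consume.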
Two Minute Papers
https://www.youtube.com/watch?v=2ciR6rA85tg
AlphaZero: DeepMind's New Chess AI | Two Minute Papers #216
The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" is available here: https://arxiv.org/pdf/1712.01815.pdf Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Recommendations: https://www.youtube.com/watch?v=akgalUq5vew https://www.youtube.com/watch?v=0g9SlVdv1PY https://www.youtube.com/watch?v=Ud8F-cNsa-k https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match http://forum.computerschach.de/cgi-bin/mwf/topic_show.pl?tid=9653 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Frank Goertzen, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Credits: Elo ratings: https://ratings.fide.com/top.phtml?list=men Magnus image source: https://www.youtube.com/watch?v=eLaOeXCAPbU 400 point difference rule: https://www.fide.com/fide/handbook.html?id=172&view=article ctrl+f 400 One chess match source: https://chess24.com/en/watch/live-tournaments/alphazero-vs-stockfish/1/1/1 Stockfish: https://stockfishchess.org/ Thumbnail background image credit: https://pixabay.com/photo-1483735/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. After defeating pretty much every highly ranked professional player in the game of Go, Google DeepMind has now ventured into the realm of chess. They recently challenged not the best humans. No, no, no, that was long ago. They challenged Stockfish, the best computer chess engine in existence, in quite possibly the most exciting chess-related event since Kasparov's matches against Deep Blue. I will note that I was told by DeepMind that this is the preliminary version of the paper, so for now we shall have an initial look, and perhaps make a part 2 video with the newer results when the final paper drops. AlphaZero is based on a neural network and reinforcement learning and is trained entirely through self-play after being given the rules of the game. It is not to be confused with AlphaGo Zero, which played Go. It is also noted that this is not simply AlphaGo Zero applied to chess. This is a new variant of the algorithm. The differences include: one, the rules of chess are asymmetric. For instance, pawns only move forward and castling is different on the kingside and the queenside, which means that neural network-based techniques are less effective at it. Two, the algorithm not only has to predict a binary win or loss probability when given a move; draws are also a possibility, and that is to be taken into consideration. Sometimes a draw is the best we can do, actually. There are many more changes to the previous incarnation of the algorithm; please make sure to have a look at the paper for details. Before we start with the results and more details, a word on Elo ratings for perspective. The Elo rating is a number that measures the relative skill level of a player. Currently, the human player with the highest Elo rating, Magnus Carlsen, is hovering around 2800. This man played chess blindfolded against 10 opponents simultaneously in Vienna a couple of years ago and won most of these games. That's how good he is. And Stockfish is one of the best current chess engines, with an Elo rating over 3300. A difference of 500 Elo points means that if it were to play against Magnus Carlsen, it would be expected to win at least 95 games out of 100, though it is noted that there is a rule suggesting a hard cutoff at around a 400-point difference. The two algorithms then played each other, AlphaZero vs. Stockfish. They were both given 60 seconds of thinking time per move, which is considered to be plenty, given that both of the algorithms take around 10 seconds at most per move. And here are the results. AlphaZero was able to outperform Stockfish after about 4 hours of learning from scratch. They played 100 games. AlphaZero won 28 times, drew 72 times, and never lost to Stockfish. Holy mother of papers, do you hear that? Stockfish is already unfathomably powerful compared to even the best human prodigies, and AlphaZero basically crushed it after 4 hours of self-play. And it was run with similar hardware as AlphaGo Zero: one machine with 4 tensor processing units. This is hardly commodity hardware, but given the trajectory of the improvements we've seen lately, it might very well be in a couple of years. Note that Stockfish does not use machine learning and is a handcrafted algorithm. People like to refer to computer opponents in computer games as AI, but those are not doing any sort of learning. So, you know what the best part is?
AlphaZero is a much more general algorithm that can also play Shogi, also referred to as Japanese chess, on an extremely high level, and this is one of the most interesting points. AlphaZero would be highly useful even if it were slightly weaker than Stockfish, because it is built on more general learning algorithms that can be reused for other tasks without investing significant human effort. But in fact, it is more general and it also crushes Stockfish. With every paper from DeepMind, the algorithm becomes better and more and more general. I can tell you, this is very, very rarely the case. Total insanity. Two more interesting tidbits about the paper. One, all the domain knowledge the algorithm is given is stated precisely for clarity. Two, one might think that as computers and processing power increase over time, all we have to do is add more brute force to the algorithm and just evaluate more positions. If you think this is the case, have a look at this. It is noted that AlphaZero was able to reliably defeat Stockfish while evaluating 10 times fewer positions per second. Maybe we could call this the AI equivalent of intuition. In other words, being able to identify a small number of promising moves and focusing on them. Chills ran down my spine as I read this paper. Being a researcher is the best job in the world, and we are even being paid for this. Unreal. This is a hot paper. There are a lot of discussions out there on this; lots of chess experts analyze and try to make sense of the games. I had a ton of fun reading and watching through some of these. As always, Two Minute Papers encourages you to explore and read more, and the video description is ample in useful materials. You will find videos with some really cool analysis from Grandmaster Daniel King, International Chess Master Daniel Orange and the YouTube channel ChessNetwork, all quality materials. And if you have enjoyed this episode and you think that eight of these videos a month are worth a few dollars, please throw a coin our way on Patreon. Or if you favor cryptocurrencies instead, you can throw Bitcoin or Ethereum our way. Your support has been amazing as always; thanks so much for sticking with us through thick and thin, even in times when weird Patreon decisions happen. Luckily, this last one has been reverted. I'm honored to have supporters like you, Fellow Scholars. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehr."}, {"start": 4.72, "end": 9.84, "text": " After defeating pretty much every highly ranked professional player in the game of Go, Google"}, {"start": 9.84, "end": 13.24, "text": " DeepMind now ventured into the realm of chess."}, {"start": 13.24, "end": 15.88, "text": " They recently challenged not the best humans."}, {"start": 15.88, "end": 17.92, "text": " No, no, no, that was long ago."}, {"start": 17.92, "end": 23.64, "text": " They challenged Stockfish, the best computer chess engine in existence in quite possibly the"}, {"start": 23.64, "end": 28.64, "text": " most exciting chess-related event since Casper of Matches against Deep Blue."}, {"start": 28.64, "end": 33.56, "text": " I will note that I was told by DeepMind that this is the preliminary version of the paper,"}, {"start": 33.56, "end": 38.96, "text": " so now we shall have an initial look and perhaps make a part 2 video with the newer results"}, {"start": 38.96, "end": 41.0, "text": " on the final paper drops."}, {"start": 41.0, "end": 46.84, "text": " Alpha Zero is based on a neural network and reinforcement learning and is trained entirely"}, {"start": 46.84, "end": 50.36, "text": " through self-play after being given the rules of the game."}, {"start": 50.36, "end": 54.36, "text": " It is not to be confused with Alpha Go Zero that played Go."}, {"start": 54.36, "end": 59.24, "text": " It is also noted that this is not simply Alpha Go Zero applied to chess."}, {"start": 59.24, "end": 61.84, "text": " This is a new variant of the algorithm."}, {"start": 61.84, "end": 66.52, "text": " The differences include one, the rules of chess are asymmetric."}, {"start": 66.52, "end": 71.96000000000001, "text": " For instance, pawns only move forward, castling is different on kingside and queenside,"}, {"start": 71.96000000000001, "end": 76.16, "text": " and this means that neural network-based techniques are less effective at it."}, {"start": 76.16, "end": 81.68, "text": " Two, the algorithm not only has to predict a binary win or loss probability when given"}, {"start": 81.68, "end": 87.4, "text": " a move, but draws are also a possibility and that is to be taken into consideration."}, {"start": 87.4, "end": 90.08000000000001, "text": " Sometimes a draw is the best we can do, actually."}, {"start": 90.08000000000001, "end": 94.08000000000001, "text": " There are many more changes to the previous incarnation of the algorithm, please make sure"}, {"start": 94.08000000000001, "end": 96.56, "text": " to have a look at the paper for details."}, {"start": 96.56, "end": 102.0, "text": " Before we start with the results and more details, a word on ILO ratings for perspective."}, {"start": 102.0, "end": 106.80000000000001, "text": " The ILO rating is a number that measures the relative skill level of a player."}, {"start": 106.8, "end": 112.28, "text": " Currently, the human player with the highest ILO rating, Magnus Carson, is hovering around"}, {"start": 112.28, "end": 113.96, "text": " 2800."}, {"start": 113.96, "end": 120.0, "text": " This man played chess blindfolded against 10 opponents simultaneously in Vienna a couple"}, {"start": 120.0, "end": 123.19999999999999, "text": " years ago and won most of these games."}, {"start": 123.19999999999999, "end": 124.52, "text": " That's how good he is."}, {"start": 124.52, "end": 130.92, "text": " And Stockfish is one of the best current chess engines with an ILO rating over 3300."}, 
{"start": 130.92, "end": 136.92, "text": " The difference of 500 ILO points means that if it were to play against Magnus Carson,"}, {"start": 136.92, "end": 142.72, "text": " it would be expected to win at least 95 games out of 100, though it is noted that there"}, {"start": 142.72, "end": 147.35999999999999, "text": " is a rule suggesting a hard cutoff at around 400 points difference."}, {"start": 147.35999999999999, "end": 152.07999999999998, "text": " The two algorithms then played each other, Alpha Zero vs Stockfish."}, {"start": 152.07999999999998, "end": 157.39999999999998, "text": " They were both given 60 seconds of thinking time per move, which is considered to be plenty,"}, {"start": 157.4, "end": 161.48000000000002, "text": " and that both of the algorithms take around 10 seconds at most per move."}, {"start": 161.48000000000002, "end": 163.08, "text": " And here are the results."}, {"start": 163.08, "end": 169.52, "text": " Alpha Zero was able to outperform Stockfish in about 4 hours of learning from scratch."}, {"start": 169.52, "end": 171.24, "text": " They played 100 games."}, {"start": 171.24, "end": 179.04000000000002, "text": " Alpha Zero won 28 times, drew 72 times, and never lost to Stockfish."}, {"start": 179.04000000000002, "end": 182.12, "text": " Holy matter of papers, do you hear that?"}, {"start": 182.12, "end": 187.12, "text": " Stockfish is already unfathomably powerful compared to even the best human prodigies"}, {"start": 187.12, "end": 192.6, "text": " and Alpha Zero basically crushed it after 4 hours of self-play."}, {"start": 192.6, "end": 198.64000000000001, "text": " And it was run with a similar hardware as Alpha Go Zero, one machine with 4 tensor processing"}, {"start": 198.64000000000001, "end": 199.64000000000001, "text": " units."}, {"start": 199.64000000000001, "end": 203.56, "text": " This is hardly commodity hardware, but given the trajectory of the improvements we've"}, {"start": 203.56, "end": 207.32, "text": " seen lately, it might very well be in a couple of years."}, {"start": 207.32, "end": 212.04000000000002, "text": " Note that Stockfish does not use machine learning and is a handcrafted algorithm."}, {"start": 212.04000000000002, "end": 216.96, "text": " People like to refer to computer opponents in computer games as AI, but it is not doing"}, {"start": 216.96, "end": 217.96, "text": " any sort of learning."}, {"start": 217.96, "end": 220.68, "text": " So, you know what the best part is?"}, {"start": 220.68, "end": 226.12, "text": " Alpha Zero is a much more general algorithm that can also play Shogi on an extremely high"}, {"start": 226.12, "end": 231.12, "text": " level, which is also referred to as Japanese chess, and this is one of the most interesting"}, {"start": 231.12, "end": 232.12, "text": " points."}, {"start": 232.12, "end": 237.36, "text": " Alpha Zero would be highly useful even if it were slightly weaker than Stockfish, because"}, {"start": 237.36, "end": 242.84, "text": " it is built on more general learning algorithms that can be reused for other tasks without"}, {"start": 242.84, "end": 245.32, "text": " investing significant human effort."}, {"start": 245.32, "end": 250.64, "text": " But in fact, it is more general and it also crashes Stockfish."}, {"start": 250.64, "end": 256.4, "text": " With every paper from DeepMind, the algorithm becomes better and more and more general."}, {"start": 256.4, "end": 260.04, "text": " I can tell you this is very, very rarely the case."}, {"start": 260.04, "end": 261.2, "text": " 
Totally insanity."}, {"start": 261.2, "end": 264.0, "text": " Two more interesting tidbits about the paper."}, {"start": 264.0, "end": 269.2, "text": " One, all the domain knowledge the algorithm is given is stated precisely for clarity."}, {"start": 269.2, "end": 275.0, "text": " Two, one might think that as computers and processing power increases over time, all"}, {"start": 275.0, "end": 280.72, "text": " we have to do is add more brute force to the algorithm and just evaluate more positions."}, {"start": 280.72, "end": 283.8, "text": " If you think this is the case, have a look at this."}, {"start": 283.8, "end": 290.96, "text": " It is noted that Alpha Zero was able to reliably defeat Stockfish while evaluating 10 times"}, {"start": 290.96, "end": 293.16, "text": " fewer positions per second."}, {"start": 293.16, "end": 296.84, "text": " Maybe we could call this the AI equivalent of intuition."}, {"start": 296.84, "end": 302.28, "text": " In other words, being able to identify a small number of promising moves and focusing"}, {"start": 302.28, "end": 303.28, "text": " on them."}, {"start": 303.28, "end": 306.47999999999996, "text": " Let us run down my spine as I read this paper."}, {"start": 306.47999999999996, "end": 311.52, "text": " Being a researcher is the best job in the world and we are even being paid for this."}, {"start": 311.52, "end": 312.52, "text": " Unreal."}, {"start": 312.52, "end": 313.52, "text": " This is a hot paper."}, {"start": 313.52, "end": 318.79999999999995, "text": " There's a lot of discussions out there on this, lots of chess experts analyze and try"}, {"start": 318.79999999999995, "end": 320.35999999999996, "text": " to make sense of the games."}, {"start": 320.35999999999996, "end": 325.2, "text": " I had a ton of fun reading and watching through some of these as always, two minute papers"}, {"start": 325.2, "end": 331.55999999999995, "text": " encourages you to explore and read more and the video description is ample in useful materials."}, {"start": 331.56, "end": 337.52, "text": " You will find videos with some really cool analysis from Grandmaster Daniel King, International"}, {"start": 337.52, "end": 344.24, "text": " chess master Daniel Orange and the YouTube channel chess network, all quality materials."}, {"start": 344.24, "end": 348.36, "text": " And if you have enjoyed this episode and you think that eight of these videos a month"}, {"start": 348.36, "end": 352.68, "text": " is worth a few dollars, please throw a coin our way on Patreon."}, {"start": 352.68, "end": 358.08, "text": " Or if you favor cryptocurrencies instead, you can throw Bitcoin or Ethereum our way."}, {"start": 358.08, "end": 362.56, "text": " Our support has been amazing as always, thanks so much for keeping with us through thick"}, {"start": 362.56, "end": 366.76, "text": " and thin, even in times when weird Patreon decisions happen."}, {"start": 366.76, "end": 369.2, "text": " Luckily, this last one has been reverted."}, {"start": 369.2, "end": 372.4, "text": " I'm honored to have supporters like you fellow scholars."}, {"start": 372.4, "end": 392.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
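The "95 games out of 100" estimate for a roughly 500-point Elo rating gap can be checked with the standard Elo expected-score formula. Here is a tiny sketch of that calculation; the 3300 and 2800 ratings are just the approximate numbers quoted above, and the expected score counts a draw as half a win.

    # Quick sanity check of the Elo claim above (standard Elo model, not from the paper).
    def elo_expected_score(rating_a, rating_b):
        """Expected score of player A vs. player B (win = 1, draw = 0.5, loss = 0)."""
        return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

    # Approximate ratings quoted in the episode: Stockfish ~3300, Magnus Carlsen ~2800.
    print(round(100 * elo_expected_score(3300, 2800), 1))  # ~94.7, i.e. roughly 95 out of 100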
Two Minute Papers
https://www.youtube.com/watch?v=YjjTPV2pXY0
AI Learns Noise Filtering For Photorealistic Videos | Two Minute Papers #215
The paper "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder" is available here: http://research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising The paper with the notoriously difficult "Spheres" scene: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2379965/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is another one of those amazing papers that I am really excited about. And the reason for that is that this is at the intersection of computer graphics and machine learning, which, as you know, is already enough to make me happy, but when I first saw the quality of the results, I was delighted to see that it delivered exactly what I was hoping for. Light simulation programs are an important subfield of computer graphics where we try to create a photorealistic image of a 3D digital scene by simulating the path of millions and millions of light rays. First, we start out with a noisy image, and as we compute more paths, it slowly clears up. However, it takes a very long time to get a perfectly clear image, and depending on the scene and the algorithm, it can take from minutes to hours. In an earlier work, we had a beautiful but pathological scene that took weeks to render on several machines. If you would like to hear more about that, the link is available in the video description. So in order to alleviate this problem, many noise filtering algorithms have surfaced over the years. The goal of these algorithms is that instead of computing more and more paths until the image clears up, we stop at a noisy image and try to guess what the final image would look like. This often happens in the presence of some additional depth and geometry information, additional images that are often referred to as feature buffers or auxiliary buffers. This information helps the noise filter get a better understanding of the scene and produce higher quality outputs. Recently, a few learning-based algorithms emerged with excellent results. Well, excellent would be an understatement, since these can take an extremely noisy image that we rendered with one ray per pixel. This is as noisy as it gets, I'm afraid, and it is absolutely stunning that we can still get usable images out of this. However, these algorithms are not capable of dealing with sequences of data and are condemned to deal with each of these images in isolation. They have no understanding of the fact that we are dealing with an animation. What does this mean exactly? What this means is that the network has no memory of how it dealt with the previous image, and if we combine this with the fact that a trace amount of noise still remains in the images, we get a disturbing flickering effect. This is because the remainder of the noise is different from image to image. This technique uses a recurrent neural network, which is able to deal with sequences of data, for instance, in our case, video. It remembers how it dealt with the previous images a few moments ago, and as a result, it can adjust and produce outputs that are temporally stable. Computer graphics researchers like to call this spatio-temporal filtering. You can see in this camera panning experiment how much smoother this new technique is. Let's try the same footage, slowed down, to see if we get a better view of the flickering. Yep, all good. Recurrent neural networks are by no means easy to train and need quite a few implementation details to get right, so make sure to have a look at the paper for details. Temporally coherent light simulation reconstruction of noisy images from one sample per pixel. And for video. This is insanity. I would go out on a limb and say that in the very near future, we'll run learning-based noise filters that take images that are so noisy, they don't even have one ray sample per pixel.
Maybe one every other pixel or so. This is going to be the new milestone. If someone had told me that this would be possible when I started doing light transport as an undergrad student, I wouldn't have believed a word of it. Computer games, VR, and all kinds of real-time applications will be able to get photorealistic light simulation graphics in real time, and temporally stable at that. I need to take some time to digest this. Thanks for watching and for your generous support, and I'll see you next time.
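To get a feel for why remembering the previous frames kills the flickering, here is a tiny toy sketch: a plain per-pixel exponential moving average over a sequence of noisy frames. This is emphatically not the recurrent denoising autoencoder from the paper, and the 1D "image" and noise levels are made up, but it shows how carrying a little state from frame to frame makes the residual noise stable over time.

    import numpy as np

    # Toy spatio-temporal idea: blend each new noisy frame into a running estimate.
    # The paper's recurrent autoencoder learns a far more capable version of this.
    rng = np.random.default_rng(0)
    clean = np.linspace(0.0, 1.0, 64)        # a made-up 1D "image" that does not change over time
    history = np.zeros_like(clean)           # state carried from frame to frame
    alpha = 0.2                              # how much we trust the newest noisy frame

    single_frame_error = 0.0
    for frame in range(50):
        noisy = clean + rng.normal(0.0, 0.3, clean.shape)   # fresh noise each frame (the flicker source)
        history = (1.0 - alpha) * history + alpha * noisy   # exponential moving average
        single_frame_error = np.abs(noisy - clean).mean()

    print("error of a single noisy frame:", round(float(single_frame_error), 3))
    print("error after temporal accumulation:", round(float(np.abs(history - clean).mean()), 3))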
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Jolna Ife here."}, {"start": 4.28, "end": 8.48, "text": " This is another one of those amazing papers that I am really excited about."}, {"start": 8.48, "end": 13.96, "text": " And the reason for that is that this is in the intersection of computer graphics and machine learning,"}, {"start": 13.96, "end": 17.16, "text": " which, as you know, is already enough to make me happy,"}, {"start": 17.16, "end": 23.76, "text": " but when I first seen the quality of the results, I was delighted to see that it delivered exactly what I was hoping for."}, {"start": 23.76, "end": 27.64, "text": " Light simulation programs are an important subfield of computer graphics"}, {"start": 27.64, "end": 32.04, "text": " where we try to create a photorealistic image of a 3D digital scene"}, {"start": 32.04, "end": 36.04, "text": " by simulating the path of millions and millions of light rays."}, {"start": 36.04, "end": 41.4, "text": " First, we start out with a noisy image, and as we compute more paths, it slowly clears up."}, {"start": 41.4, "end": 45.08, "text": " However, it takes a very long time to get a perfectly clear image,"}, {"start": 45.08, "end": 49.480000000000004, "text": " and depending on the scene and the algorithm, it can take from minutes to hours."}, {"start": 49.480000000000004, "end": 53.36, "text": " In an earlier work, we had a beautiful but pathological scene"}, {"start": 53.36, "end": 56.400000000000006, "text": " that took weeks to render on several machines."}, {"start": 56.4, "end": 60.44, "text": " If you would like to hear more about that, the link is available in the video description."}, {"start": 60.44, "end": 65.8, "text": " So in order to alleviate this problem, many noise filtering algorithms surfaced over the years."}, {"start": 65.8, "end": 69.8, "text": " The goal of these algorithms is that instead of computing more and more paths"}, {"start": 69.8, "end": 73.44, "text": " until the image clears up, we stop at a noisy image"}, {"start": 73.44, "end": 76.52, "text": " and try to guess what the final image would look like."}, {"start": 76.52, "end": 81.4, "text": " This often happens in the presence of some additional depth and geometry information,"}, {"start": 81.4, "end": 86.80000000000001, "text": " additional images that are often referred to as feature buffers or auxiliary buffers."}, {"start": 86.80000000000001, "end": 91.2, "text": " This information helps the noise filter to get a better understanding of the scene"}, {"start": 91.2, "end": 93.44000000000001, "text": " and produce higher quality outputs."}, {"start": 93.44000000000001, "end": 98.0, "text": " Recently, a few learning-based algorithms emerged with excellent results."}, {"start": 98.0, "end": 100.48, "text": " Well, excellent would be an understatement,"}, {"start": 100.48, "end": 105.64000000000001, "text": " since these can take an extremely noisy image that we rendered with one ray per pixel."}, {"start": 105.64000000000001, "end": 107.92, "text": " This is as noisy as it gets I'm afraid,"}, {"start": 107.92, "end": 112.48, "text": " and it is absolutely stunning that we can still get usable images out of this."}, {"start": 112.48, "end": 116.92, "text": " However, these algorithms are not capable of dealing with sequences of data"}, {"start": 116.92, "end": 120.68, "text": " and are condemned to deal with each of these images in isolation."}, {"start": 120.68, "end": 124.72, "text": " They have no 
understanding of the fact that we are dealing with an animation."}, {"start": 124.72, "end": 126.24000000000001, "text": " What does this mean exactly?"}, {"start": 126.24000000000001, "end": 131.12, "text": " What this means is that the network has no memory of how it dealt with the previous image"}, {"start": 131.12, "end": 136.16, "text": " and if we combine it with the fact that a trace amount of noise still remains in the images,"}, {"start": 136.16, "end": 138.48, "text": " we get a disturbing flickering effect."}, {"start": 138.48, "end": 142.48, "text": " This is because the remainder of the noise is different from image to image."}, {"start": 142.48, "end": 147.6, "text": " This technique uses a recurrent neural network which is able to deal with sequences of data,"}, {"start": 147.6, "end": 150.07999999999998, "text": " for instance, in our case, video."}, {"start": 150.07999999999998, "end": 153.92, "text": " It remembers how it dealt with the previous images a few moments ago,"}, {"start": 153.92, "end": 159.35999999999999, "text": " and as a result, it can adjust and produce outputs that are temporarily stable."}, {"start": 159.35999999999999, "end": 163.8, "text": " Computer graphics researchers like to call this spatio-temporal filtering."}, {"start": 163.8, "end": 169.0, "text": " You can see in this camera panning experiment how much smoother this new technique is."}, {"start": 169.0, "end": 174.04000000000002, "text": " Let's try the same footage, slow down, and see if we get a better view of the flickering."}, {"start": 177.56, "end": 179.12, "text": " Yep, all good."}, {"start": 179.12, "end": 182.52, "text": " Recurrent neural networks are by no means easy to train"}, {"start": 182.52, "end": 185.64000000000001, "text": " and need quite a few implementation details to get it right,"}, {"start": 185.64000000000001, "end": 188.28, "text": " so make sure to have a look at the paper for details."}, {"start": 188.28, "end": 194.92000000000002, "text": " Temporal coherent light simulation reconstruction of noisy images from one sample per pixel."}, {"start": 194.92000000000002, "end": 196.44, "text": " And for video."}, {"start": 196.44, "end": 198.12, "text": " This is insanity."}, {"start": 198.12, "end": 201.56, "text": " I would go out on a limp and say that in the very near future,"}, {"start": 201.56, "end": 206.28, "text": " we'll run learning based noise filters that take images that are so noisy,"}, {"start": 206.28, "end": 209.16, "text": " they don't even have one ray sample per pixel."}, {"start": 209.16, "end": 211.56, "text": " Maybe one every other pixel or so."}, {"start": 211.56, "end": 213.64, "text": " This is going to be the new milestone."}, {"start": 213.64, "end": 216.04, "text": " If someone told me that this would be possible,"}, {"start": 216.04, "end": 219.16, "text": " when I started doing light transport as an undergrad student,"}, {"start": 219.16, "end": 221.4, "text": " I wouldn't have believed a word of it."}, {"start": 221.4, "end": 224.84, "text": " Computer games, VR, and all kinds of real-time applications"}, {"start": 224.84, "end": 229.72, "text": " will be able to get photorealistic light simulation graphics in real-time,"}, {"start": 229.72, "end": 231.88, "text": " and temporarily stable."}, {"start": 231.88, "end": 234.12, "text": " I need to take some time to digest this."}, {"start": 234.12, "end": 253.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QmIM24JDE3A
AI Beats Radiologists at Pneumonia Detection | Two Minute Papers #214
The paper "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning" is available here: https://stanfordmlgroup.github.io/projects/chexnet/ Interesting commentary on the article, check this one out too! https://lukeoakdenrayner.wordpress.com/2017/11/18/quick-thoughts-on-chestxray14-performance-claims-and-clinical-tasks/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/bCaBTq Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this work, a 121-layer convolutional neural network is trained to recognize pneumonia and 13 other diseases. Pneumonia is an inflammatory lung condition that is responsible for a million hospitalizations and 50,000 deaths per year in the U.S. alone. Such an algorithm requires a training set of formidable size to work properly. This means a bunch of input-output pairs. In this case, one training sample is an input frontal X-ray image of the chest, and the outputs are annotations by experts who mark which of the 14 sought diseases are present in this sample. So they say something like: this image contains pneumonia here, and this one doesn't. This is not just a binary yes or no answer, but a more detailed heat map of possible regions that fit the diagnosis. The training set used for this algorithm contained over 100,000 images of 30,000 patients. This is then given to the neural network, and its task is to learn the properties of these diseases by itself. Then, after the learning process has taken place, previously unseen images are given to the algorithm and a set of radiologists. This is called a test set, and of course it is crucial that both the training and the test sets are reliable. If the training and test sets are created by one expert radiologist and we then benchmark the neural network against a different, randomly picked radiologist, that's not a very reliable process, because each of the humans may be wrong in more than a few cases. Instead, the training and test set annotation data is created by asking multiple radiologists and taking a majority vote on their decisions. So now that the training and test data is reliable, we can properly benchmark a human versus a neural network. And here's the result. This learning algorithm outperforms the average human radiologist. The performance was measured in a 2D space where sensitivity and specificity were the two interesting metrics. Sensitivity means the proportion of positive samples that were classified as positive, and specificity means the proportion of negative samples that were classified as negative. The crosses denote the human doctors, and as you can see, whichever radiologist we look at, even though they have very different false positive and negative ratios, they are all located below the blue curve which denotes the results of the learning algorithm. This is a simple diagram, but if you think about what it actually means, this is an incredible application of machine intelligence. And now a word on limitations. It is noted that this was an isolated test. For instance, the radiologists were only given one image, and usually when diagnosing someone, they know more about the history of the patient, which may further help their decisions. For instance, a history of a strong cough and high fever is highly useful supplementary information for humans when diagnosing someone who may have pneumonia. Beyond only the frontal view of the chest, it is also standard practice to use the lateral view as well if the results are inconclusive. These views are not available in this data set, and it is conjectured that this may sway the comparison towards humans. However, I'll note that this information may also benefit the AI just as much as the radiologists, and this seems like a suitable direction for future work. Finally, this is not the only algorithm for pneumonia detection, and it has been compared to the state of the art for all 14 diseases, and this new technique came out on top on all of them.
Also have a look at the paper for details, because training a 121-layer neural network requires some clever shenanigans, as was the case here too. It is really delightful to see that these learning algorithms can help diagnose serious illnesses and provide higher-quality healthcare to more and more people around the world, especially in places where access to expert radiologists is limited. Everyone needs to hear about this. If you wish to help us spread the word and tell these incredible stories to even more people, please consider supporting us on Patreon. We also know that many of you are crazy for Bitcoin, so we set up a Bitcoin address as well. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
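Since the whole comparison boils down to sensitivity and specificity, here is a tiny sketch of how those two numbers are computed from a set of predictions, swept over a few decision thresholds, which is how you trace out a curve like the one the radiologists' crosses are compared against. The labels and scores below are random, purely illustrative numbers, not the chest X-ray data from the paper.

    import numpy as np

    # Illustration only: sensitivity and specificity at a few thresholds,
    # computed on synthetic labels and synthetic model scores.
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 1000)                                   # 1 = pneumonia, 0 = healthy (made up)
    scores = np.clip(0.3 * labels + rng.normal(0.5, 0.25, 1000), 0, 1)  # fake predicted probabilities

    for threshold in (0.3, 0.5, 0.7):
        predicted = scores >= threshold
        tp = np.sum(predicted & (labels == 1))   # sick and flagged as sick
        fn = np.sum(~predicted & (labels == 1))  # sick but missed
        tn = np.sum(~predicted & (labels == 0))  # healthy and cleared
        fp = np.sum(predicted & (labels == 0))   # healthy but flagged
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"threshold {threshold:.1f}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")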
[{"start": 0.0, "end": 4.42, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.42, "end": 11.540000000000001, "text": " In this work, a 121-layer convolution on your own network is trained to recognize pneumonia"}, {"start": 11.540000000000001, "end": 13.780000000000001, "text": " and 13 different diseases."}, {"start": 13.780000000000001, "end": 19.5, "text": " Numonia is an inflammatory lung condition that is responsible for a million hospitalizations"}, {"start": 19.5, "end": 23.46, "text": " and 50,000 deaths per year in the U.S. alone."}, {"start": 23.46, "end": 28.42, "text": " Such an algorithm requires a training set of formidable size to work properly."}, {"start": 28.42, "end": 31.14, "text": " This means a bunch of input output pairs."}, {"start": 31.14, "end": 37.300000000000004, "text": " In this case, one training sample is an input frontal x-ray image of the chest and the outputs"}, {"start": 37.300000000000004, "end": 42.92, "text": " are annotations by experts who mark which of the 14 sought diseases are present in this"}, {"start": 42.92, "end": 43.92, "text": " sample."}, {"start": 43.92, "end": 47.900000000000006, "text": " So they say like, this image contains pneumonia here and this doesn't."}, {"start": 47.900000000000006, "end": 53.28, "text": " This is not just a binary yes or no answer, but a more detailed heat map of possible regions"}, {"start": 53.28, "end": 55.02, "text": " that fit the diagnosis."}, {"start": 55.02, "end": 61.7, "text": " The training set used for this algorithm contained over 100,000 images of 30,000 patients."}, {"start": 61.7, "end": 66.58, "text": " This is then given to the neural network and its task is to learn the properties of these"}, {"start": 66.58, "end": 68.54, "text": " diseases by itself."}, {"start": 68.54, "end": 73.02000000000001, "text": " Then after the learning process took place, previously unseen images are given to the"}, {"start": 73.02000000000001, "end": 76.54, "text": " algorithm and a set of radiologists."}, {"start": 76.54, "end": 81.62, "text": " This is called a test set and of course it is crucial that both the training and the test"}, {"start": 81.62, "end": 83.30000000000001, "text": " sets are reliable."}, {"start": 83.3, "end": 89.14, "text": " If the training and test set is created by one expert radiologist and then we again benchmark"}, {"start": 89.14, "end": 94.42, "text": " a neural network against a different randomly picked radiologist, that's not a very reliable"}, {"start": 94.42, "end": 99.42, "text": " process because each of the humans may be wrong in more than a few cases."}, {"start": 99.42, "end": 105.3, "text": " Instead, the training and test set annotation data is created by asking multiple radiologists"}, {"start": 105.3, "end": 108.42, "text": " and taking a majority vote on their decisions."}, {"start": 108.42, "end": 114.22, "text": " So now that the training and test data is reliable, we can properly benchmark a human versus"}, {"start": 114.22, "end": 115.58, "text": " a neural network."}, {"start": 115.58, "end": 116.82000000000001, "text": " And here's the result."}, {"start": 116.82000000000001, "end": 121.54, "text": " This learning algorithm outperforms the average human radiologist."}, {"start": 121.54, "end": 126.86, "text": " The performance was measured in a 2D space where sensitivity and specificity were the two"}, {"start": 126.86, "end": 128.9, "text": " interesting metrics."}, {"start": 128.9, "end": 133.18, "text": " 
Sensitivity means the proportion of positive samples that were classified as positive"}, {"start": 133.18, "end": 138.54000000000002, "text": " and specificity means the proportion of negative samples that were classified as negative."}, {"start": 138.54000000000002, "end": 144.1, "text": " The crosses mean the human doctors and as you can see whichever radiologist we look at,"}, {"start": 144.1, "end": 149.5, "text": " even though they have very different false positive and negative ratios, they are all located"}, {"start": 149.5, "end": 153.78, "text": " below the blue curve which denotes the results of the learning algorithm."}, {"start": 153.78, "end": 159.26000000000002, "text": " This is a simple diagram but if you think about what it actually means, this is an incredible"}, {"start": 159.26000000000002, "end": 161.70000000000002, "text": " application of machine intelligence."}, {"start": 161.7, "end": 163.89999999999998, "text": " And now a word on limitations."}, {"start": 163.89999999999998, "end": 166.61999999999998, "text": " It is noted that this was an isolated test."}, {"start": 166.61999999999998, "end": 172.17999999999998, "text": " For instance, the radiologists were only given one image and usually when diagnosing someone,"}, {"start": 172.17999999999998, "end": 176.94, "text": " they know more about the history of the patient that may further help their decisions."}, {"start": 176.94, "end": 181.98, "text": " For instance, a history of a strong cough and high fever is highly useful supplementary"}, {"start": 181.98, "end": 186.29999999999998, "text": " information for humans when diagnosing someone who may have pneumonia."}, {"start": 186.29999999999998, "end": 190.98, "text": " Beyond only the frontal view of the chest, it is also standard practice to use the lateral"}, {"start": 190.98, "end": 194.01999999999998, "text": " view as well if the results are inconclusive."}, {"start": 194.01999999999998, "end": 198.17999999999998, "text": " These views are not available in this data set and it is conjectured that it may sway"}, {"start": 198.17999999999998, "end": 200.26, "text": " the comparison towards humans."}, {"start": 200.26, "end": 205.54, "text": " However, I'll note that this information may also benefit the AI just as much as the"}, {"start": 205.54, "end": 209.73999999999998, "text": " radiologists and this seems like a suitable direction for future work."}, {"start": 209.73999999999998, "end": 214.7, "text": " Finally, this is not the only algorithm for pneumonia detection and it has been compared"}, {"start": 214.7, "end": 220.94, "text": " to the state of the art for all 14 diseases and this new technique came out on top on all"}, {"start": 220.94, "end": 221.94, "text": " of them."}, {"start": 221.94, "end": 228.1, "text": " Also have a look at the paper for details because training a 121 layer neural network requires"}, {"start": 228.1, "end": 231.38, "text": " some clever shenanigans as this was the case here too."}, {"start": 231.38, "end": 236.94, "text": " It is really delightful to see that these learning algorithms can help diagnosing serious illnesses"}, {"start": 236.94, "end": 241.73999999999998, "text": " and provide higher quality healthcare to more and more people around the world, especially"}, {"start": 241.74, "end": 246.06, "text": " in places where access to expert radiologists is limited."}, {"start": 246.06, "end": 247.58, "text": " Everyone needs to hear about this."}, {"start": 247.58, "end": 251.62, "text": " If you wish to help us 
spreading the word and telling these incredible stories to even"}, {"start": 251.62, "end": 255.02, "text": " more people, please consider supporting us on Patreon."}, {"start": 255.02, "end": 259.98, "text": " We also know that many of you are crazy for Bitcoin so we also set up a Bitcoin address"}, {"start": 259.98, "end": 260.98, "text": " as well."}, {"start": 260.98, "end": 263.1, "text": " Details are available in the video description."}, {"start": 263.1, "end": 281.3, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=v1oWke0Qf1E
Universal Neural Style Transfer | Two Minute Papers #213
The paper "Universal Style Transfer via Feature Transforms" and its source code is available here: https://arxiv.org/abs/1705.08086 https://github.com/Yijunmaverick/UniversalStyleTransfer Recommended for you: https://www.youtube.com/watch?v=Rdpbnd0pCiI - What is an Autoencoder? We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1978682/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's have a look at some recent results on neural style transfer. You know the drill: we take a photo with some content, and, for example, a painting with the desired style, and the output is an image where this style is applied to our content. If this is done well and with good taste, it really looks like magic. However, for pretty much all the previous techniques, there are always some mysterious styles that result in failure cases. And the reason for this is the fact that these techniques are trained on a set of style images, and if they face a style that is wildly different from these training images, the results won't be very usable. This new algorithm is also based on neural networks, and it doesn't need to be trained on these style images, but it can perform high-quality style transfer, and it works on arbitrary styles. This sounds a bit like black magic. So how does this happen exactly? First, an autoencoder is trained for image reconstruction. An autoencoder is a neural network where the input and the output image are supposed to be the same thing. So far, this doesn't make any sense, because all the neural network does is copy and paste the inputs to the outputs. Not very useful. However, if we reduce the number of neurons in one of the middle layers to very, very few neurons compared to the others, we get a bottleneck. This bottleneck essentially hamstrings the neural network and forces it to first come up with a highly compressed representation of an image; this is the encoder network. Then it has to reconstruct the full image from this compressed representation; this is called the decoder network. So encoding is compression, and decoding is decompression, or, more intuitively, reconstruction. This compressed representation can be thought of as the essence of the image, which is a very concise representation, but carefully crafted such that a full reconstruction of the image can take place based on it. Autoencoders are previous work, and if you would like to hear more about them, check the video description, as we have dedicated an earlier episode to them. And now, the value proposition of this work comes from the fact that we don't just use the autoencoder as is, but rip this network in half and use the encoder part on both the input style and content images. This way, the concept of style transfer becomes much, much simpler in this compressed representation. In the end, we are not stuck with this compressed result, because if you remember, we also have a decoder, which is the second part of the neural network that performs a reconstruction of an image from this compressed essence. As a result, we don't have to train this neural network on the style images, and it will work with any chosen style. Mind you, with most style transfer techniques, we are given an output image and we either have to take it or leave it, because we can't apply any meaningful edits to it. A cool corollary of this design decision is that we can also get closer to our artistic vision by fiddling with parameters. For instance, the scale and weight of the style transfer can be changed on the fly to our liking. As always, the new technique is compared to a bunch of other competing algorithms. Due to the general and lightweight nature of this method, it seems to perform more consistently across a set of widely varying input styles. We can also create some mattes for our target image and apply different artistic styles to different parts of it.
Local parts of a style can also be transferred. Remember, the first style transfer technique was amazing, but very limited, and took an hour on a state-of-the-art graphics card in a desktop computer. This one takes less than a second and works for any style. Now, as more new phones contain chips for performing deep learning, we can likely look forward to a totally amazing future where style transfer can be done in our pockets and in real time. What a time it is to be alive. Thanks for watching and for your generous support, and I'll see you next time.
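If you would like to see the bottleneck idea in the smallest possible form, here is a toy sketch: a linear autoencoder that squeezes 32-dimensional inputs through a 4-number "essence" and is trained purely for reconstruction. This is of course not the deep convolutional encoder and decoder used in the paper, and the data is synthetic, but it shows the encoder/decoder split and why the compressed code can still carry enough information to rebuild the input.

    import numpy as np

    # Toy linear autoencoder: encoder compresses, decoder reconstructs.
    # Synthetic data that secretly lives on 4 hidden factors, so a 4-wide
    # bottleneck is enough to reconstruct it well.
    rng = np.random.default_rng(2)
    latent = rng.normal(size=(256, 4))
    mixing = rng.normal(size=(4, 32)) * 0.2
    data = latent @ mixing

    enc = rng.normal(scale=0.1, size=(32, 4))   # encoder weights: input -> compressed essence
    dec = rng.normal(scale=0.1, size=(4, 32))   # decoder weights: essence -> reconstruction
    lr = 0.05

    for step in range(5000):
        code = data @ enc                               # encode (compression)
        recon = code @ dec                              # decode (reconstruction)
        err = recon - data
        dec -= lr * code.T @ err / len(data)            # gradient step on reconstruction error
        enc -= lr * data.T @ (err @ dec.T) / len(data)

    print("data variance:       ", round(float(np.var(data)), 4))
    print("reconstruction error:", round(float(np.mean(err ** 2)), 4))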
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5600000000000005, "end": 8.24, "text": " Let's have a look at some recent results on mural style transfer."}, {"start": 8.24, "end": 12.72, "text": " You know the drill, we take a photo with some content, and for example, a painting with"}, {"start": 12.72, "end": 18.88, "text": " the desired style, and the output is an image where this style is applied to our content."}, {"start": 18.88, "end": 22.92, "text": " If this is done well and with good taste, it really looks like magic."}, {"start": 22.92, "end": 26.96, "text": " However, for pretty much all the previous techniques, there are always some mysterious"}, {"start": 26.96, "end": 29.76, "text": " styles that result in failure cases."}, {"start": 29.76, "end": 33.92, "text": " And the reason for this is the fact that these techniques are trained on a set of style"}, {"start": 33.92, "end": 38.56, "text": " images, and if they face a style that is wildly different from these training images,"}, {"start": 38.56, "end": 40.72, "text": " the results won't be very usable."}, {"start": 40.72, "end": 45.040000000000006, "text": " This new algorithm is also based on neural networks, and it doesn't need to be trained"}, {"start": 45.040000000000006, "end": 52.32, "text": " on these style images, but it can perform high-quality style transfer, and it works on arbitrary"}, {"start": 52.32, "end": 53.32, "text": " styles."}, {"start": 53.32, "end": 55.08, "text": " This sounds a bit like black magic."}, {"start": 55.08, "end": 56.92, "text": " So how does this happen exactly?"}, {"start": 56.92, "end": 60.84, "text": " First, an auto-ank order is trained for image reconstruction."}, {"start": 60.84, "end": 65.24000000000001, "text": " An auto-ank order is a neural network where the input and output image is supposed to be"}, {"start": 65.24000000000001, "end": 66.6, "text": " the same thing."}, {"start": 66.6, "end": 71.76, "text": " So far, this doesn't make any sense, because all the neural network does is copy and pasting"}, {"start": 71.76, "end": 73.64, "text": " the inputs to the outputs."}, {"start": 73.64, "end": 74.64, "text": " Not very useful."}, {"start": 74.64, "end": 79.96000000000001, "text": " However, if we reduce the number of neurons in one of the middle layers to very, very few"}, {"start": 79.96000000000001, "end": 83.4, "text": " neurons compared to the others, we get a bottleneck."}, {"start": 83.4, "end": 88.0, "text": " This bottleneck essentially hamstrings the neural network and forces it to first come"}, {"start": 88.0, "end": 91.4, "text": " up with a highly compressed representation of an image."}, {"start": 91.4, "end": 97.56, "text": " This is the encoder network, and then reconstruct the full image from this compressed representation."}, {"start": 97.56, "end": 99.68, "text": " This is called the decoder network."}, {"start": 99.68, "end": 105.84, "text": " So encoding is compression, decoding is decompression, or more intuitively reconstruction."}, {"start": 105.84, "end": 109.96000000000001, "text": " This compressed representation can be thought of as the essence of the image, which is a"}, {"start": 109.96, "end": 115.8, "text": " very concise representation, but carefully crafted such that a full reconstruction of the"}, {"start": 115.8, "end": 118.52, "text": " image can take place based on it."}, {"start": 118.52, "end": 122.16, "text": " Auto-ank orders are previous 
work, and if you would like to hear more about them, check"}, {"start": 122.16, "end": 125.88, "text": " the video description as we have dedicated an earlier episode to it."}, {"start": 125.88, "end": 130.56, "text": " And now, the value proposition of this work comes from the fact that we don't just use"}, {"start": 130.56, "end": 136.68, "text": " the auto-ank order as is, but rip this network in half and use the encoder part on both"}, {"start": 136.68, "end": 139.35999999999999, "text": " the input style and content images."}, {"start": 139.36, "end": 145.04000000000002, "text": " This way the concept of style transfer is much, much simpler in this compressed representation."}, {"start": 145.04000000000002, "end": 149.72000000000003, "text": " In the end, we are not stuck with this compressed result because if you remember, we also have"}, {"start": 149.72000000000003, "end": 154.54000000000002, "text": " a decoder, which is the second part of the neural network that performs a reconstruction"}, {"start": 154.54000000000002, "end": 157.48000000000002, "text": " of an image from this compressed essence."}, {"start": 157.48000000000002, "end": 162.12, "text": " As a result, we don't have to train this neural network on the style images, and it will"}, {"start": 162.12, "end": 164.48000000000002, "text": " work with any chosen style."}, {"start": 164.48, "end": 169.56, "text": " Tell you, with most style transfer techniques, we are given an output image and we either have"}, {"start": 169.56, "end": 173.79999999999998, "text": " to take it or leave it because we can't apply any meaningful edits to it."}, {"start": 173.79999999999998, "end": 178.83999999999997, "text": " A cool corollary of this design decision is that we can also get closer to our artistic"}, {"start": 178.83999999999997, "end": 181.35999999999999, "text": " vision by fiddling with parameters."}, {"start": 181.35999999999999, "end": 185.76, "text": " For instance, the scale and weight of the style transfer can be changed on the fly to"}, {"start": 185.76, "end": 186.95999999999998, "text": " our liking."}, {"start": 186.95999999999998, "end": 191.6, "text": " As always, the new technique is compared to a bunch of other competing algorithms."}, {"start": 191.6, "end": 196.51999999999998, "text": " Due to the general and lightweight nature of this method, it seems to perform more consistently"}, {"start": 196.51999999999998, "end": 204.64, "text": " across a set of widely varying input styles."}, {"start": 204.64, "end": 209.84, "text": " We can also create some mats for our target image and apply different artistic styles to"}, {"start": 209.84, "end": 213.6, "text": " different parts of it."}, {"start": 213.6, "end": 216.56, "text": " Local parts of a style can also be transferred."}, {"start": 216.56, "end": 222.2, "text": " Remember, the first style transfer technique was amazing, but very limited and took an hour"}, {"start": 222.2, "end": 225.52, "text": " on a state of the art graphics card in a desktop computer."}, {"start": 225.52, "end": 229.68, "text": " This one takes less than a second and works for any style."}, {"start": 229.68, "end": 234.92000000000002, "text": " Now as more new phones contain chips for performing deep learning, we can likely look forward"}, {"start": 234.92000000000002, "end": 241.68, "text": " to a totally amazing future where style transfer can be done in our pockets and in real time."}, {"start": 241.68, "end": 243.36, "text": " What a time it is to be alive."}, {"start": 243.36, 
"end": 247.04000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6JZNEb5uDu4
This Neural Network Optimizes Itself | Two Minute Papers #212
The paper "Hierarchical Representations for Efficient Architecture Search" is available here: https://arxiv.org/pdf/1711.00436.pdf Genetic algorithm (+ Mona Lisa problem) implementation: 1. https://users.cg.tuwien.ac.at/zsolnai/gfx/mona_lisa_parallel_genetic_algorithm/ 2. https://users.cg.tuwien.ac.at/zsolnai/gfx/knapsack_genetic/ Andrej Karpathy's online demo: http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html Overfitting and Regularization For Deep Learning - https://www.youtube.com/watch?v=6aF9sJrzxaM Training Deep Neural Networks With Dropout - https://www.youtube.com/watch?v=LhhEv1dMpKE How Do Genetic Algorithms Work? - https://www.youtube.com/watch?v=ziMHaGQJuSI We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2692456/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As we know from the series, neural network-based techniques are extraordinarily successful in defeating problems that were considered to be absolutely impossible as little as 10 years ago. When we'd like to use them for something, choosing the right kind of neural network is one part of the task, but usually the even bigger problem is choosing the right architecture. Typically, at a bare minimum, this means the type and number of layers in the network and the number of neurons to be used in each layer. Bigger networks can learn solutions for more complex problems, so it seems that the answer is quite easy. Just throw the biggest possible neural network we can at the problem and hope for the best, but if you think that it is that easy or trivial, you need to think again. Here's why. Bigger networks come at a cost. They take longer to train, and even worse, if we have networks that are too big, we bump into the problem of overfitting. Overfitting is a phenomenon where a learning algorithm starts essentially memorizing the training data without actually doing the learning. As a result, its knowledge is not going to generalize to unseen data at all. Imagine a student in a school who has a tremendous aptitude for memorizing everything from the textbook. If the exam questions happen to be the same, this student will do extremely well, but in the case of even the slightest deviations, well, too bad. Even though people like to call this rote learning, there is nothing about the whole process that resembles any kind of learning at all. A smaller neural network, like a less knowledgeable student who has done their homework properly, would do way, way better. So this is overfitting, the bane of so many modern learning algorithms. It can be kind of defeated by using techniques like L1 and L2 regularization or dropout. These often help, but none of them are silver bullets. If you would like to hear more about these, we've covered them in an earlier episode, actually two episodes. As always, the links are in the video description for the more curious fellow scholars out there. So the algorithm itself is learning, but for some reason we have to design its architecture by hand. As we discussed, some architectures, like some students, of course, significantly outperform other ones, and we are left to perform a lengthy trial-and-error process to find the best ones by hand. So speaking about learning algorithms, why don't we make them learn their own architectures? And this new algorithm is about architecture search that does exactly that. I'll note that this is by far not the first crack at this problem, but it definitely is a remarkable improvement over the state of the art. It represents the neural network architecture as an organism and makes it evolve via genetic programming. This is just as cool as you would think it is, and not half as complex as you may imagine at first. We have an earlier episode on genetic algorithms. I wrote some source code as well, which is available free of charge for everyone. Make sure to have a look at the video description for more on that. You'll love it. In this chart, you can see the number of evolution steps on the horizontal x-axis and the performance of these evolved architectures over time on the vertical y-axis.
Finally, after taking about one and a half days to perform these few thousand evolutionary steps, the best architectures found by this algorithm are only slightly inferior to the best existing neural networks for many classical datasets, which is bloody amazing. Please refer to the paper for details and comparisons against state-of-the-art neural networks and other architecture search approaches; there are lots of very easily readable results reported there. Note that this is still preliminary work and uses hundreds of graphics cards in the process. However, if you remember how it went with AlphaGo, the computational costs were cut down by a factor of 10 within a little more than a year. And until that happens, we have learning algorithms that learn to optimize themselves. This sounds like science fiction. How cool is that? Thanks for watching and for your generous support, and I'll see you next time.
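To make the "architecture as an evolving organism" idea a bit more tangible, here is a deliberately tiny genetic-algorithm sketch. An architecture is just a list of layer widths, and the fitness function is an invented stand-in for validation accuracy minus a cost term; in the real paper, evaluating fitness means actually training networks, which is where those hundreds of graphics cards go. Everything below, the encoding, the mutation operators, and the fitness, is illustrative and not the hierarchical representation from the paper.

    import math
    import random

    WIDTHS = [8, 16, 32, 64, 128, 256]

    def fitness(layers):
        # Invented stand-in for "validation accuracy minus cost": diminishing
        # returns on total width, penalties for parameters and depth.
        capacity = sum(layers)
        return (1.0 - math.exp(-capacity / 100.0)) - 0.001 * capacity - 0.05 * len(layers)

    def mutate(layers):
        # One random edit to the architecture: change, add, or remove a layer.
        child = list(layers)
        op = random.choice(["change", "add", "remove"])
        if op == "change":
            child[random.randrange(len(child))] = random.choice(WIDTHS)
        elif op == "add":
            child.insert(random.randrange(len(child) + 1), random.choice(WIDTHS))
        elif len(child) > 1:  # remove, but never drop the last layer
            child.pop(random.randrange(len(child)))
        return child

    random.seed(0)
    population = [[32, 32] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                                      # keep the fittest half
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    best = max(population, key=fitness)
    print("best architecture:", best, " fitness:", round(fitness(best), 3))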
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehair."}, {"start": 4.48, "end": 9.84, "text": " As we know from the series, neural network-based techniques are extraordinarily successful in defeating"}, {"start": 9.84, "end": 15.64, "text": " problems that were considered to be absolutely impossible as little as 10 years ago."}, {"start": 15.64, "end": 20.0, "text": " When we'd like to use them for something, choosing the right kind of neural network is one"}, {"start": 20.0, "end": 26.16, "text": " part of the task, but usually the even bigger problem is choosing the right architecture."}, {"start": 26.16, "end": 32.0, "text": " Typically, at a bare minimum, means the type and number of layers in the network and the number"}, {"start": 32.0, "end": 38.24, "text": " of neurons to be used in each layer. Bigger networks can learn solutions for more complex problems,"}, {"start": 38.24, "end": 44.64, "text": " so it seems that the answer is quite easy. Just throw the biggest possible neural network we can"}, {"start": 44.64, "end": 49.44, "text": " at the problem and hope for the best, but if you think that it is that easy or trivial,"}, {"start": 49.44, "end": 55.120000000000005, "text": " you need to think again. Here's why. Bigger networks come at a cost. They take longer to train"}, {"start": 55.12, "end": 60.64, "text": " and even worse, if we have networks that are too big, we bump into the problem of overfitting."}, {"start": 60.64, "end": 66.24, "text": " Overfitting is a phenomenon when a learning algorithm starts essentially memorizing the training"}, {"start": 66.24, "end": 71.67999999999999, "text": " data without actually doing the learning. As a result, its knowledge is not going to generalize"}, {"start": 71.67999999999999, "end": 78.0, "text": " for unseen data at all. Imagine a student in a school who has a tremendous aptitude in memorizing"}, {"start": 78.0, "end": 83.28, "text": " everything from the textbook. If the exam questions happen to be the same, this student will do"}, {"start": 83.28, "end": 89.6, "text": " extremely well, but in the case of even the slightest deviations, well, too bad. Even though people"}, {"start": 89.6, "end": 95.12, "text": " like to call this road learning, there is nothing about the whole process that resembles any kind"}, {"start": 95.12, "end": 100.72, "text": " of learning at all. A smaller neural network, a less knowledgeable student who has done their homework"}, {"start": 100.72, "end": 107.2, "text": " properly would do way way better. So this is overfitting, the bane of so many modern learning"}, {"start": 107.2, "end": 114.24000000000001, "text": " algorithms. It can be kind of defeated by using techniques like L1 and L2 regularization or dropout."}, {"start": 114.24000000000001, "end": 119.12, "text": " These often help, but none of them are silver bullets. If you would like to hear more about these,"}, {"start": 119.12, "end": 124.56, "text": " we've covered them in an earlier episode, actually two episodes. As always, the links are in the"}, {"start": 124.56, "end": 129.92000000000002, "text": " video description for the more curious fellow scholars out there. So the algorithm itself is"}, {"start": 129.92000000000002, "end": 136.0, "text": " learning, but for some reason we have to design their architecture by hand. 
As we discussed,"}, {"start": 136.0, "end": 141.68, "text": " some architectures, like some students, of course, significantly outperform other ones,"}, {"start": 141.68, "end": 146.72, "text": " and we are left to perform a lengthy trial and error to find the best ones by hand."}, {"start": 147.28, "end": 152.96, "text": " So speaking about learning algorithms, why don't we make them learn their own architectures?"}, {"start": 152.96, "end": 159.04, "text": " And this new algorithm is about architecture search that does exactly that. I note that this is by"}, {"start": 159.04, "end": 164.4, "text": " far not the first crack at this problem, but it definitely is a remarkable improvement over the"}, {"start": 164.4, "end": 170.0, "text": " state of the art. It represents the neural network architecture as an organism and makes it"}, {"start": 170.0, "end": 176.72, "text": " evolve via genetic programming. This is just as cool as you would think it is and not half as complex"}, {"start": 176.72, "end": 182.16, "text": " as you may imagine at first. We have an earlier episode on genetic algorithms. I wrote some source code"}, {"start": 182.16, "end": 186.48000000000002, "text": " as well, which is available free of charge for everyone. Make sure to have a look at the video"}, {"start": 186.48000000000002, "end": 191.92000000000002, "text": " description for more on that. You'll love it. In this chart, you can see the number of evolution"}, {"start": 191.92, "end": 198.07999999999998, "text": " steps on the horizontal x-axis and the performance of these evolved architectures over time on the"}, {"start": 198.07999999999998, "end": 204.72, "text": " vertical y-axis. Finally, after taking about one and a half days to perform these few thousand"}, {"start": 204.72, "end": 210.79999999999998, "text": " evolutionary steps, the best architectures found by this algorithm are only slightly inferior to the"}, {"start": 210.79999999999998, "end": 216.72, "text": " best existing neural networks for many classical datasets, which is bloody amazing. Please refer to"}, {"start": 216.72, "end": 222.16, "text": " the paper for details and comparisons against the state of the art neural networks and other architecture"}, {"start": 222.16, "end": 227.76, "text": " search approaches, there are lots of very easily readable results reported there. Note that this is"}, {"start": 227.76, "end": 233.44, "text": " still a preliminary work and uses hundreds of graphics cards in the process. However, if you"}, {"start": 233.44, "end": 239.52, "text": " remember how it went with AlphaGo, the computational costs were cut down by a factor of 10 within a"}, {"start": 239.52, "end": 245.44, "text": " little more than a year. And until that happens, we have learning algorithms that learn to optimize"}, {"start": 245.44, "end": 251.28, "text": " themselves. This sounds like science fiction. How cool is that? Thanks for watching and for your"}, {"start": 251.28, "end": 276.56, "text": " generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1zvohULpe_0
How Do Neural Networks See The World? Pt 2. | Two Minute Papers #211
The paper "Feature Visualization" is available here: https://distill.pub/2017/feature-visualization/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers One-time payments: PayPal: https://www.paypal.me/TwoMinutePapers Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh Distill journal: https://distill.pub/ Recommended for you: How Do Neural Networks See The World? https://www.youtube.com/watch?v=hBobYd8nNtQ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This one is going to be a treat. As you know all too well after watching at least a few episodes of this series, neural networks offer us amazingly powerful tools to defeat problems that we didn't stand a chance against for a long, long time. We are now in the Golden Age of AI, and no business or field of science is going to remain unaffected by this revolution. However, this approach comes with its own disadvantage compared to previous handcrafted algorithms. It is harder to know what is really happening under the hood. That's also kind of the advantage of neural networks, because they can deal with complexities that we humans are not built to comprehend. But still, it is always nice to peek within a neural network and see if it is trying to learn the correct concepts that are relevant to our application. Maybe later we'll be able to look into a neural network, learn what it is trying to do, simplify it, and create a more reliable handcrafted algorithm that mimics it. What's more, maybe they will be able to write this piece of code by themselves. So clearly, there's lots of value to be had from these visualizations; however, this topic is way more complex than one would think at first. Earlier, we talked about a technique that we called activation maximization, which was about trying to find an input that makes a given neuron as excited as possible. Here you can see what several individual neurons have learned when I trained them to recognize wooden patterns. In this first layer, it is looking for colors. Then, in the second layer, some basic patterns emerge. As we look into the third layer, we see that it starts to recognize horizontal, vertical, and diagonal patterns, and in the fourth and fifth layers, it uses combinations of the previously seen features, and as you can see, beautiful, somewhat symmetric figures emerge. If you would like to see more on this, I put a link to a previous episode in the video description. Then, a follow-up work came for multifaceted neuron visualizations that unveiled even more beautiful and relevant visualizations; a good example was showing which neuron is responsible for recognizing groceries. A new Distill article on this topic has recently appeared by Christopher Olah and his colleagues at Google. Distill is a journal that is about publishing clear explanations of common, interesting phenomena in machine learning research. All their articles so far are beyond amazing, so make sure to have a look at this new journal as a whole; as always, the link is available in the video description. They usually include some web demos that you can also play with. I'll show you one in a moment. This article gives a nice rundown of recent works in optimization-based feature visualization. The optimization part can take place in a number of different ways, but it generally means that we start out with a noisy image and look to change this image to maximize the activation of a particular neuron. This means that we slowly morph this piece of noise into an image that provides us information on what the network has learned. It is indeed a powerful way to perform visualization, often more informative than just choosing the most exciting images for a neuron from the training database. It unveils exactly the information the neuron is looking for, not something that only correlates with that information.
There is more about not only visualizing the neurons in isolation, but getting a more detailed understanding of the interactions between these neurons. After all, a neuron network produces an output as a combination of these neuron activations, so we might as well try to get a detailed look at how they interact. Different regularization techniques to guide the visualization process towards more informative results are also discussed. You can also play with some of these web demos, for instance, this one shows the neuron activations with respect to the learning rates. There is so much more in the article I urge you to read the whole thing. It doesn't take that long and it is a wondrous adventure into the imagination of neural networks. How cool is that? If you have enjoyed this episode, you can pick up some really cool perks on Patreon, like Early Access, voting on the order of the next few episodes, or getting your name in the video description as a key contributor. This also helps us make better videos in the future, and we also use part of these funds to empower research projects and conferences. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
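A minimal sketch of the optimization-based feature visualization idea described above, not the article's exact regularized procedure: start from noise and run gradient ascent on the input image so that one channel of a chosen layer becomes as activated as possible. PyTorch and torchvision are assumed, and the layer and channel indices are hypothetical choices.

import torch
import torchvision.models as models

# Pretrained feature extractor (newer torchvision versions use the weights= argument instead)
model = models.vgg16(pretrained=True).features.eval()
layer_index, channel = 10, 42            # hypothetical layer/channel to visualize

image = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)

activation = {}
def hook(_module, _inputs, output):      # records the activation of the chosen layer
    activation["value"] = output
model[layer_index].register_forward_hook(hook)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # maximize the mean activation of one channel (minimize its negative)
    loss = -activation["value"][0, channel].mean()
    loss.backward()
    optimizer.step()
# `image` has now been slowly morphed from noise toward a pattern this channel responds to;
# in practice, the article's regularizers make the result much cleaner than this raw version.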
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.64, "end": 6.88, "text": " This one is going to be a treat."}, {"start": 6.88, "end": 11.4, "text": " As you know, all too well after watching at least a few episodes of this series, Neural"}, {"start": 11.4, "end": 16.6, "text": " Networks offer us amazingly powerful tools to defeat problems that we didn't stand a"}, {"start": 16.6, "end": 19.14, "text": " chance against for a long, long time."}, {"start": 19.14, "end": 24.48, "text": " We are now in the Golden Age of AI and no business or field of science is going to remain"}, {"start": 24.48, "end": 26.52, "text": " unaffected by this revolution."}, {"start": 26.52, "end": 31.64, "text": " However, this approach comes with its own disadvantage compared to previous handcrafted"}, {"start": 31.64, "end": 32.64, "text": " algorithms."}, {"start": 32.64, "end": 36.04, "text": " It is harder to know what is really happening under the hood."}, {"start": 36.04, "end": 41.08, "text": " That's also kind of the advantage of Neural Networks because they can deal with complexities"}, {"start": 41.08, "end": 44.4, "text": " that we humans are not built to comprehend."}, {"start": 44.4, "end": 49.519999999999996, "text": " But still, it is always nice to peek within a Neural Network and see if it is trying to"}, {"start": 49.519999999999996, "end": 53.32, "text": " learn the correct concepts that are relevant to our application."}, {"start": 53.32, "end": 58.84, "text": " Maybe later we'll be able to look into a Neural Network, learn what it is trying to do, simplify"}, {"start": 58.84, "end": 63.6, "text": " it, and create a more reliable handcrafted algorithm that mimics it."}, {"start": 63.6, "end": 68.48, "text": " What's even more, maybe they will be able to write this piece of code by themselves."}, {"start": 68.48, "end": 73.52, "text": " So clearly, there's lots of value to be had from the visualizations, however, this topic"}, {"start": 73.52, "end": 76.88, "text": " is way more complex than one would think at first."}, {"start": 76.88, "end": 81.52, "text": " Earlier, we talked about a technique that we called activation maximization, which was"}, {"start": 81.52, "end": 87.08, "text": " about trying to find an input that makes a given neuron as excited as possible."}, {"start": 87.08, "end": 91.96, "text": " Here you can see what several individual neurons have learned when I trained them to recognize"}, {"start": 91.96, "end": 93.44, "text": " wooden patterns."}, {"start": 93.44, "end": 96.19999999999999, "text": " In this first layer, it is looking for colors."}, {"start": 96.19999999999999, "end": 100.08, "text": " Then, in the second layer, some basic patterns emerge."}, {"start": 100.08, "end": 105.88, "text": " As we look into the third layer, we see that it starts to recognize horizontal, vertical,"}, {"start": 105.88, "end": 110.96, "text": " and diagonal patterns, and in the fourth and fifth layers, it uses combinations of the"}, {"start": 110.96, "end": 117.44, "text": " previously seen features, and as you can see, beautiful, somewhat symmetric figures emerge."}, {"start": 117.44, "end": 121.39999999999999, "text": " If you would like to see more on this, I put a link to a previous episode in the video"}, {"start": 121.39999999999999, "end": 122.39999999999999, "text": " description."}, {"start": 122.39999999999999, "end": 128.4, "text": " Then, a follow-up work came for multifaceted neuron visualizations 
that unveiled even more"}, {"start": 128.4, "end": 134.6, "text": " beautiful and relevant visualizations, a good example was showing which neuron is responsible"}, {"start": 134.6, "end": 136.35999999999999, "text": " for recognizing groceries."}, {"start": 136.36, "end": 142.20000000000002, "text": " A new distale article on this topic has recently appeared by Christopher Ola and his colleagues"}, {"start": 142.20000000000002, "end": 143.20000000000002, "text": " at Google."}, {"start": 143.20000000000002, "end": 148.20000000000002, "text": " Distale is a journal that is about publishing clear explanations to common interesting"}, {"start": 148.20000000000002, "end": 150.76000000000002, "text": " phenomena in machine learning research."}, {"start": 150.76000000000002, "end": 156.08, "text": " All their articles so far are beyond amazing, so make sure to have a look at this new journal"}, {"start": 156.08, "end": 159.84, "text": " as a whole, as always, the link is available in the video description."}, {"start": 159.84, "end": 162.96, "text": " They usually include some web demos that you can also play with."}, {"start": 162.96, "end": 164.68, "text": " I'll show you one in a moment."}, {"start": 164.68, "end": 170.6, "text": " This article gives a nice rundown of recent works in optimization-based feature visualization."}, {"start": 170.6, "end": 175.16, "text": " The optimization part can take place in a number of different ways, but it generally means"}, {"start": 175.16, "end": 181.04000000000002, "text": " that we start out with a noisy image and look to change this image to maximize the activation"}, {"start": 181.04000000000002, "end": 182.72, "text": " of a particular neuron."}, {"start": 182.72, "end": 188.04000000000002, "text": " This means that we slowly morph this piece of noise into an image that provides us information"}, {"start": 188.04000000000002, "end": 190.08, "text": " on what the network has learned."}, {"start": 190.08, "end": 196.12, "text": " It is indeed a powerful way to perform visualization, often more informative than just choosing the"}, {"start": 196.12, "end": 200.04000000000002, "text": " most exciting images for a neuron from the training database."}, {"start": 200.04000000000002, "end": 205.52, "text": " It unveils exactly the information the neuron is looking for, not something that only correlates"}, {"start": 205.52, "end": 206.52, "text": " with that information."}, {"start": 206.52, "end": 211.84, "text": " There is more about not only visualizing the neurons in isolation, but getting a more"}, {"start": 211.84, "end": 216.04000000000002, "text": " detailed understanding of the interactions between these neurons."}, {"start": 216.04, "end": 221.35999999999999, "text": " After all, a neuron network produces an output as a combination of these neuron activations,"}, {"start": 221.35999999999999, "end": 225.68, "text": " so we might as well try to get a detailed look at how they interact."}, {"start": 225.68, "end": 231.0, "text": " Different regularization techniques to guide the visualization process towards more informative"}, {"start": 231.0, "end": 233.12, "text": " results are also discussed."}, {"start": 233.12, "end": 237.6, "text": " You can also play with some of these web demos, for instance, this one shows the neuron"}, {"start": 237.6, "end": 240.35999999999999, "text": " activations with respect to the learning rates."}, {"start": 240.35999999999999, "end": 244.48, "text": " There is so much more in the article I urge you to read the 
whole thing."}, {"start": 244.48, "end": 249.28, "text": " It doesn't take that long and it is a wondrous adventure into the imagination of neural"}, {"start": 249.28, "end": 250.28, "text": " networks."}, {"start": 250.28, "end": 252.0, "text": " How cool is that?"}, {"start": 252.0, "end": 256.28, "text": " If you have enjoyed this episode, you can pick up some really cool perks on Patreon,"}, {"start": 256.28, "end": 261.15999999999997, "text": " like Early Access, voting on the order of the next few episodes, or getting your name"}, {"start": 261.15999999999997, "end": 263.76, "text": " in the video description as a key contributor."}, {"start": 263.76, "end": 268.76, "text": " This also helps us make better videos in the future, and we also use part of these funds"}, {"start": 268.76, "end": 271.76, "text": " to empower research projects and conferences."}, {"start": 271.76, "end": 273.84, "text": " Details are available in the video description."}, {"start": 273.84, "end": 277.67999999999995, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=M_eaS7X-mIw
Meta Learning Shared Hierarchies | Two Minute Papers #210
The paper "Meta Learning Shared Hierarchies" and its source code is available here: https://arxiv.org/abs/1710.09767 https://github.com/openai/mlsh A video from Robert Miles: https://www.youtube.com/watch?v=MUVbqQ3STFA We have been experimenting with opening a bitcoin wallet. Let us know if it's working properly and thank you very much for your support! Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1804496/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique where we have a virtual creature that tries to learn an optimal set of actions to maximize a reward in a changing environment. Playing video games, helicopter control, and even optimizing light transport simulations are among the more awesome example use cases for it. But if we train a reinforcement learner from scratch, we'll see that it typically starts out with a brute-force search in the space of the simplest, lowest-level actions. This not only leads to crazy behavior early on, but is also highly inefficient, requires way more experience than humans do, and the obtained knowledge cannot be reused for similar tasks. It can learn the game it was trained on, often even on a superhuman level, but if we need it to function in a new environment, all this previous knowledge has to be thrown away. This new algorithm is very much like how humans learn. It breaks down a big and complex task into sequences of smaller actions. These are called sub-policies and can be shared between tasks. Learning to walk and crawl are excellent examples of this: they will likely be reused for a variety of different problems and will lead to rapid learning on new, unseen tasks even if they differ significantly from the previously seen problems. Not only that, but the search space over sub-policies can easily be 100 or more times smaller than the original search space of all possible actions; therefore, this kind of search is way more efficient than previous techniques. Of course, creating a good selection of sub-policies is challenging, because they have to be robust enough to be helpful on many possible tasks, but not too specific to one problem, otherwise they lose their utility. A few episodes ago, we mentioned a related technique by the name Neural Task Programming, and it seems that this one is capable of generalization not only over different variations of the same task, but across different tasks as well. These ants were trained to traverse several different mazes one after another and quickly realized that the basic movement directions should be retained. Creating more general learning algorithms is one of the holy grail problems of AI research and this one seems to be a proper, proper step towards defeating it. We are not there yet, but it's hard not to be optimistic with this incredible rate of progress each year. Really excited to see how this area improves over the next few months. The source code of this project is also available. Oh, and before we go, make sure to check out the channel of Robert Miles, who makes excellent videos about AI, and I'd recommend starting with one of his videos that you are objectively guaranteed to enjoy. If you wish to find out why, you'll see the link in the video description or just click the cat picture appearing here on the screen in a moment. If you indeed enjoyed it, make sure to subscribe to his channel. Thanks for watching and for your generous support and I'll see you next time.
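A toy sketch of the shared-hierarchy idea, not the paper's implementation: a master policy picks one of K sub-policies every N environment steps, and only the chosen sub-policy emits low-level actions. The policies here are placeholder linear maps, and the environment step function is assumed to be provided by the task.

import numpy as np

K, N, obs_dim, act_dim = 4, 50, 8, 2
master_weights = np.random.randn(obs_dim, K)                          # master: observation -> sub-policy choice
sub_weights = [np.random.randn(obs_dim, act_dim) for _ in range(K)]   # shared, reusable sub-policies

def run_episode(env_step, obs, horizon=500):
    """env_step(action) -> (next_obs, reward) is assumed to come from the task."""
    total_reward, active = 0.0, 0
    for t in range(horizon):
        if t % N == 0:                                 # the master only decides every N steps
            active = int(np.argmax(obs @ master_weights))
        action = np.tanh(obs @ sub_weights[active])    # low-level action from the chosen sub-policy
        obs, reward = env_step(action)
        total_reward += reward
    return total_reward

# On a new task, only master_weights would need to be re-learned, while the sub_weights
# (walking, crawling, and so on) are kept and reused - that is the source of the speedup.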
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 9.6, "text": " Reinforcement learning is a technique where we have a virtual creature that tries to learn"}, {"start": 9.6, "end": 14.68, "text": " an optimal set of actions to maximize a reward in a changing environment."}, {"start": 14.68, "end": 19.92, "text": " Playing video games, helicopter control, and even optimizing light transport simulations"}, {"start": 19.92, "end": 23.32, "text": " are among the more awesome example use cases for it."}, {"start": 23.32, "end": 28.28, "text": " But if we train a reinforcement learner from scratch, we'll see that it typically starts"}, {"start": 28.28, "end": 33.84, "text": " out with a brute force search in the space of the simplest, lowest-level actions."}, {"start": 33.84, "end": 39.160000000000004, "text": " This not only leads to crazy behavior early on, but is also highly ineffective, requires"}, {"start": 39.160000000000004, "end": 45.16, "text": " way more experience than humans do, and the obtained knowledge cannot be reused for similar"}, {"start": 45.16, "end": 46.16, "text": " tasks."}, {"start": 46.16, "end": 50.66, "text": " It can learn the game it was trained on, often even on a superhuman level, but if we"}, {"start": 50.66, "end": 55.36, "text": " needed to function in a new environment, all this previous knowledge has to be thrown"}, {"start": 55.36, "end": 56.36, "text": " away."}, {"start": 56.36, "end": 59.44, "text": " This algorithm is very much like how humans learn."}, {"start": 59.44, "end": 64.68, "text": " It breaks down a big and complex task into sequences of smaller actions."}, {"start": 64.68, "end": 69.36, "text": " These are called sub-polices and can be shared between tasks."}, {"start": 69.36, "end": 74.6, "text": " Learning to walk and crawl are excellent examples of that and will likely be reused for a variety"}, {"start": 74.6, "end": 80.2, "text": " of different problems and will lead to rapid learning on new unseen tasks even if they"}, {"start": 80.2, "end": 84.16, "text": " differ significantly from the previously seen problems."}, {"start": 84.16, "end": 90.88, "text": " Not only that, but the search space over sub-polices can easily be 100 or more times smaller"}, {"start": 90.88, "end": 96.47999999999999, "text": " than the original search space of all possible actions, therefore this kind of search is way"}, {"start": 96.47999999999999, "end": 98.96, "text": " more efficient than previous techniques."}, {"start": 98.96, "end": 103.82, "text": " Of course, creating a good selection of sub-polices is challenging, because they have to be"}, {"start": 103.82, "end": 109.72, "text": " robust enough to be helpful on many possible tasks, but not too specific to one problem,"}, {"start": 109.72, "end": 111.92, "text": " otherwise they lose their utility."}, {"start": 111.92, "end": 117.48, "text": " A few episodes ago, we mentioned a related technique by the name Neural Task Programming,"}, {"start": 117.48, "end": 123.52, "text": " and it seems that this one is capable of generalization not only over different variations of the"}, {"start": 123.52, "end": 127.12, "text": " same task, but across different tasks as well."}, {"start": 127.12, "end": 132.36, "text": " These ends were trained to traverse several different mazes one after another and quickly"}, {"start": 132.36, "end": 136.56, "text": " realized that the basic movement directions should be 
retained."}, {"start": 136.56, "end": 141.36, "text": " Creating more general learning algorithms is one of the holy grail problems of AI research"}, {"start": 141.36, "end": 145.52, "text": " and this one seems to be a proper, proper step towards defeating it."}, {"start": 145.52, "end": 149.88000000000002, "text": " We are not there yet, but it's hard not to be optimistic with this incredible rate of"}, {"start": 149.88000000000002, "end": 151.64000000000001, "text": " progress each year."}, {"start": 151.64000000000001, "end": 155.48000000000002, "text": " Really excited to see how this area improves over the next few months."}, {"start": 155.48000000000002, "end": 158.24, "text": " The source code of this project is also available."}, {"start": 158.24, "end": 163.88000000000002, "text": " Oh, and before we go, make sure to check out the channel of Robert Miles, who makes excellent"}, {"start": 163.88000000000002, "end": 169.92000000000002, "text": " videos about AI, and I'd recommend starting with one of his videos that you are objectively"}, {"start": 169.92, "end": 171.64, "text": " guaranteed to enjoy."}, {"start": 171.64, "end": 175.95999999999998, "text": " If you wish to find out why, you'll see the link in the video description or just click"}, {"start": 175.95999999999998, "end": 179.27999999999997, "text": " the cat picture appearing here on the screen in a moment."}, {"start": 179.27999999999997, "end": 182.6, "text": " If you indeed enjoyed it, make sure to subscribe to his channel."}, {"start": 182.6, "end": 202.72, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6DVng5JVuhI
Image Matting With Deep Neural Networks | Two Minute Papers #209
The paper "Deep Image Matting" and a (seemingly) unofficial implementation by someone else is available here: https://sites.google.com/view/deepimagematting https://github.com/Joker316701882/Deep-Image-Matting Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/95VjEC Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Image matting is the process of taking an input image and separating its foreground from the background. It is an important preliminary step for creating visual effects where we cut an actor out from green screen footage and change the background to something else. And image matting is also an important part of these new, awesome portrait mode selfies where the background looks blurry and out of focus for a neat, artistic effect. To perform this properly, we need to know how to separate the foreground from the background. Matting human hair and telling accurately which hair strand is the foreground and which is the background is one of the more difficult parts of this problem. This is also the reason for many of the failure cases of the portrait mode photos made with the new iPhone and Pixel cameras. The input of this problem formulation is a colored image or video and the output is an alpha matte where white and lighter colors encode the foreground and darker colors are assigned to the background. After this step, it is easy to separate and cut out the different layers and selectively replace some of them. Traditional techniques rely on useful heuristics like assuming that the foreground and the background are dominated by different colors. This is useful, but of course it's not always true. And clearly, we would get the best results if we had a human artist creating these alpha mattes. Of course, this is usually prohibitively expensive for real-world use and costs a ton of time and money. The main reason why humans are successful at this is that they have an understanding of the objects in the scene. So perhaps we could come up with a neural network-based learning solution that could replicate this ideal case. The first part of this algorithm is a deep neural network that takes images as an input and outputs an alpha matte; it was trained on close to 50,000 input-output pairs. Then comes the second refinement stage, where we take the output matte from the first step and use a more shallow neural network to further refine the edges and sharper details. There are a ton of comparisons in the paper, and we are going to have a look at some of them; as you can see, it works remarkably well for difficult situations where many tiny hair strands are to be matted properly. If you look closely here, you can also see the minute differences between the results of the raw and refined steps. And it is shown that the refined version is more similar to the ground truth solution, which is abbreviated as GT here. By the way, creating a dataset with tons of ground truth data is also a huge endeavor in and of itself. So thank you very much to the folks at alpha-matting.com for creating this dataset; you can see how important this kind of work is in making it easier to compare state-of-the-art research works. Adobe was part of this research project, so if everything goes well, we can soon expect such a feature to appear in their products. Also, if you are interested, we also have some nice Two Minute Papers shirts for your enjoyment. If you are located in the US, check twominutepapers.com, and for worldwide shipping, check the video description for the links. All photos of you wearing them are appreciated. Plus, scholarly points if they depict you reading a paper. Thanks for watching and for your generous support and I'll see you next time.
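Once the network has predicted an alpha matte, it plugs into the standard compositing equation I = alpha * F + (1 - alpha) * B, which is what background replacement boils down to. A minimal sketch with NumPy, assuming the foreground, background, and predicted matte are float arrays in [0, 1]:

import numpy as np

def composite(foreground, background, alpha):
    """foreground/background: (H, W, 3) images, alpha: (H, W) predicted matte."""
    a = alpha[..., None]                       # broadcast the matte over the color channels
    return a * foreground + (1.0 - a) * background

# Random data standing in for a real photo, a new background, and the network's output:
fg = np.random.rand(480, 640, 3)
bg = np.random.rand(480, 640, 3)
matte = np.random.rand(480, 640)               # in practice, the refined alpha matte
new_image = composite(fg, bg, matte)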
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karajola Ifehir."}, {"start": 4.8, "end": 9.8, "text": " Image Matting is the process of taking an input image and separating its foreground from"}, {"start": 9.8, "end": 10.8, "text": " the background."}, {"start": 10.8, "end": 15.8, "text": " It is an important preliminary step for creating visual effects where we cut an actor out"}, {"start": 15.8, "end": 19.6, "text": " from green screen footage and change the background to something else."}, {"start": 19.6, "end": 24.32, "text": " And Image Matting is also an important part of these new, awesome portrait mode selfies"}, {"start": 24.32, "end": 28.76, "text": " where the background looks blurry and out of focus for a neat, artistic effect."}, {"start": 28.76, "end": 33.96, "text": " To perform this properly, we need to know how to separate the foreground from the background."}, {"start": 33.96, "end": 38.96, "text": " Matting human hair and telling accurately which hair strand is the foreground and which"}, {"start": 38.96, "end": 42.6, "text": " is the background is one of the more difficult parts of this problem."}, {"start": 42.6, "end": 47.36, "text": " This is also the reason for many of the failure cases of the portrait mode photos made"}, {"start": 47.36, "end": 49.88, "text": " with the new iPhone and Pixel cameras."}, {"start": 49.88, "end": 55.08, "text": " The input of this problem formulation is a colored image or video and the output is an"}, {"start": 55.08, "end": 60.6, "text": " alpha-matte where white and lighter colors encode the foreground and darker colors are"}, {"start": 60.6, "end": 62.64, "text": " assigned to the background."}, {"start": 62.64, "end": 67.08, "text": " After this step, it is easy to separate and cut out the different layers and selectively"}, {"start": 67.08, "end": 68.88, "text": " replace some of them."}, {"start": 68.88, "end": 74.08, "text": " Traditional techniques rely on useful heuristics like assuming that the foreground and the background"}, {"start": 74.08, "end": 76.36, "text": " are dominated by different colors."}, {"start": 76.36, "end": 79.56, "text": " This is useful, but of course it's not always true."}, {"start": 79.56, "end": 84.28, "text": " And clearly, we would get the best results if we had a human artist creating these alpha"}, {"start": 84.28, "end": 85.28, "text": " mats."}, {"start": 85.28, "end": 90.36, "text": " Of course, this is usually prohibitively expensive for real world use and costs a ton of time"}, {"start": 90.36, "end": 91.36, "text": " and money."}, {"start": 91.36, "end": 95.52, "text": " The main reason why humans are successful at this is that they have an understanding of"}, {"start": 95.52, "end": 97.12, "text": " the objects in the scene."}, {"start": 97.12, "end": 101.88, "text": " So perhaps we could come up with a neural network-based learning solution that could replicate this"}, {"start": 101.88, "end": 103.12, "text": " ideal case."}, {"start": 103.12, "end": 107.88, "text": " The first part of this algorithm is a deep neural network that takes images as an input"}, {"start": 107.88, "end": 113.56, "text": " and outputs an alpha-matte which was trained on close to 50,000 input output pairs."}, {"start": 113.56, "end": 118.56, "text": " So here comes the second refinement stage where we take the output mat from the first step"}, {"start": 118.56, "end": 124.24000000000001, "text": " and use a more shallow neural network to further refine the edges and 
sharper details."}, {"start": 124.24000000000001, "end": 128.52, "text": " There are a ton of comparisons in the paper and we are going to have a look at some of them"}, {"start": 128.52, "end": 133.4, "text": " and as you can see, it works remarkably well for difficult situations where many tiny"}, {"start": 133.4, "end": 136.16, "text": " hair strands are to be matted properly."}, {"start": 136.16, "end": 140.4, "text": " If you look closely here, you can also see the minute differences between the results"}, {"start": 140.4, "end": 142.64000000000001, "text": " of the raw and refined steps."}, {"start": 142.64, "end": 147.0, "text": " And it is shown that the refined version is more similar to the ground truth solution"}, {"start": 147.0, "end": 149.2, "text": " and is abbreviated with GT here."}, {"start": 149.2, "end": 154.32, "text": " By the way, creating a dataset with tons of ground truth data is also a huge endeavor in and"}, {"start": 154.32, "end": 155.32, "text": " of itself."}, {"start": 155.32, "end": 160.95999999999998, "text": " So thank you very much for the folks at alpha-matting.com for creating this dataset and you can see how"}, {"start": 160.95999999999998, "end": 166.2, "text": " important this kind of work is to make it easier to compare state-of-the-art research works"}, {"start": 166.2, "end": 167.2, "text": " more easily."}, {"start": 167.2, "end": 171.88, "text": " Adobe was part of this research project so if everything goes well, we can soon expect"}, {"start": 171.88, "end": 174.6, "text": " such a feature to appear in their products."}, {"start": 174.6, "end": 179.72, "text": " Also, if you are interested, we also have some nice two-minute paper shirts for your enjoyment."}, {"start": 179.72, "end": 185.2, "text": " If you are located in the US, check two-minutepapers.com and for worldwide shipping, check the video"}, {"start": 185.2, "end": 186.68, "text": " description for the links."}, {"start": 186.68, "end": 189.72, "text": " All photos of you wearing them are appreciated."}, {"start": 189.72, "end": 193.56, "text": " Plus, scholarly points if it depicts you reading a paper."}, {"start": 193.56, "end": 211.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NEscK5RCtlo
Terrain Generation With Deep Learning | Two Minute Papers #208
The paper "Interactive Example-Based Terrain Authoring with Conditional Generative Adversarial Networks" is available here: https://hal.archives-ouvertes.fr/hal-01583706/file/tog.pdf We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. We have recently witnessed the emergence of neural network-based techniques that are able to synthesize all sorts of images. Our previous episode was about NVIDIA's algorithm that created high-resolution images of imaginary celebrities, which was a really cool application of generative adversarial networks. This architecture means that we have a pair of neural networks, one that learns to generate new images, and the other that learns to tell a fake image from a real one. As they compete against each other, they get better and better without any human interaction. So we can clearly use them to create 2D images, but why stop there? Why not use this technique, for instance, to create assets for digital media? So instead of 2D images, let's try to adapt these networks to generate high-resolution 3D models of terrains that we can use to populate a virtual world. Both computer games and the motion picture industry could benefit greatly from such a tool. This process is typically done via procedural generation, which is basically a sort of guided random terrain generation. Here, we can have a more direct effect on the output without putting in tens of hours of work to get the job done. In the first training step, the technique learns how an image of a terrain corresponds to input drawings. Then we will be able to sketch a draft of a landscape with rivers, ridges, and valleys, and the algorithm will output a high-quality model of the terrain itself. During this process, we can have a look at the current output and refine our drawings in the meantime, leading to a super-efficient process where we can go from a thought to high-quality final results within a few seconds without being bogged down by the technical details. What's more, it can not only deal with erased subregions, but also automatically fill them with sensible information to save time for us. What an outstanding convenience feature. And the algorithm can also apply physical manipulations like erosion to the final results. After the training for the erosion step is done, the computational cost is practically zero. For instance, running an erosion simulator on this piece of data would take around 40 seconds, whereas the neural network can do it in 25 milliseconds. The full simulation would take almost a minute, whereas the network can mimic its results practically instantaneously. A limitation of this technique is that if the input is too sparse, unpleasant grid artifacts may appear. There are tons more cool features in the paper; make sure to have a look, as always, it is available in the video description. This is a really well-thought-out and well-presented work that I expect to be a true powerhouse for terrain authoring in the future. And in the meantime, we have reached 100,000 subscribers. A hundred thousand fellow scholars. Wow! This is absolutely amazing and honestly, I never thought that this would ever happen. So, happy paperversary. Thank you very much for coming along on this journey of science and I am very happy to see that the series brings joy and learning to more people than ever. Thanks for watching and for your generous support and I'll see you next time.
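A rough pix2pix-style sketch of the sketch-to-heightmap idea described above, under clearly stated assumptions: the Generator and Discriminator below are tiny placeholders rather than the paper's architecture, PyTorch is assumed, and random tensors stand in for real (sketch, heightmap) training pairs. The generator maps a user sketch to a terrain heightmap, and the discriminator judges whether a (sketch, heightmap) pair looks real.

import torch
import torch.nn as nn

class Generator(nn.Module):                 # placeholder: 1-channel sketch -> 1-channel heightmap
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, sketch):
        return self.net(sketch)

class Discriminator(nn.Module):             # placeholder: scores concatenated (sketch, heightmap) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, sketch, height):
        return self.net(torch.cat([sketch, height], dim=1)).mean(dim=(1, 2, 3))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

sketch = torch.rand(4, 1, 64, 64)           # stand-ins for real user sketches
real_height = torch.rand(4, 1, 64, 64)      # stand-ins for real terrain heightmaps

# One training step: the discriminator learns to separate real from generated heightmaps...
fake_height = G(sketch)
d_loss = bce(D(sketch, real_height), torch.ones(4)) + \
         bce(D(sketch, fake_height.detach()), torch.zeros(4))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# ...and the generator learns to fool it while staying close to the reference terrain.
g_loss = bce(D(sketch, fake_height), torch.ones(4)) + \
         nn.functional.l1_loss(fake_height, real_height)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()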
[{"start": 0.0, "end": 4.44, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.44, "end": 8.76, "text": " We have recently witnessed the emergence of neural network-based techniques that are able"}, {"start": 8.76, "end": 11.24, "text": " to synthesize all sorts of images."}, {"start": 11.24, "end": 16.92, "text": " Our previous episode was about Nvidia's algorithm that created high-resolution images of imaginary"}, {"start": 16.92, "end": 22.04, "text": " celebrities that was a really cool application of generative adversarial networks."}, {"start": 22.04, "end": 26.68, "text": " This architecture means that we have a pair of neural networks, one that learns to generate"}, {"start": 26.68, "end": 31.36, "text": " new images, and the other learns to tell a fake image from a real one."}, {"start": 31.36, "end": 36.2, "text": " As they compete against each other, they get better and better without any human interaction."}, {"start": 36.2, "end": 40.6, "text": " So we can clearly use them to create 2D images, but why stop there?"}, {"start": 40.6, "end": 45.2, "text": " Why not use this technique, for instance, to create assets for digital media?"}, {"start": 45.2, "end": 50.16, "text": " So instead of 2D images, let's try to adapt these networks to generate high-resolution"}, {"start": 50.16, "end": 55.2, "text": " 3D models of terrains that we can use to populate a virtual world."}, {"start": 55.2, "end": 60.040000000000006, "text": " Both computer games and the motion picture industry could benefit greatly from such a tool."}, {"start": 60.040000000000006, "end": 65.36, "text": " This process is typically done via procedural generation, which is basically a sort of"}, {"start": 65.36, "end": 67.48, "text": " guided random terrain generation."}, {"start": 67.48, "end": 72.28, "text": " Here, we can have a more direct effect on the output without putting in tens of hours"}, {"start": 72.28, "end": 74.24000000000001, "text": " of work to get the job done."}, {"start": 74.24000000000001, "end": 79.28, "text": " In this first training step, this technique learns how an image of a terrain corresponds"}, {"start": 79.28, "end": 81.16, "text": " to input drawings."}, {"start": 81.16, "end": 87.08, "text": " Then we will be able to sketch a draft of a landscape with rivers, ridges, valleys,"}, {"start": 87.08, "end": 91.8, "text": " and the algorithm will output a high-quality model of the terrain itself."}, {"start": 91.8, "end": 95.92, "text": " During this process, we can have a look at the current output and refine our drawings"}, {"start": 95.92, "end": 101.03999999999999, "text": " in the meantime, leading to a super-efficient process where we can go from a thought to"}, {"start": 101.03999999999999, "end": 106.52, "text": " high-quality final results within a few seconds without being bogged down with the technical"}, {"start": 106.52, "end": 107.67999999999999, "text": " details."}, {"start": 107.68, "end": 113.72000000000001, "text": " Once more, it can also not only deal with erased subregions, but it can also automatically"}, {"start": 113.72000000000001, "end": 117.32000000000001, "text": " fill them with sensible information to save time for us."}, {"start": 117.32000000000001, "end": 120.16000000000001, "text": " What an outstanding convenience feature."}, {"start": 120.16000000000001, "end": 126.16000000000001, "text": " And the algorithm can also perform physical manipulations like erosion to the final results."}, {"start": 
126.16000000000001, "end": 130.72, "text": " After the training for the erosion step is done, the computational cost is practically"}, {"start": 130.72, "end": 131.72, "text": " zero."}, {"start": 131.72, "end": 137.20000000000002, "text": " For instance, running an erosion simulator on this piece of data would take around 40 seconds,"}, {"start": 137.2, "end": 141.44, "text": " where the neural network can do it in 25 milliseconds."}, {"start": 141.44, "end": 146.48, "text": " The full simulation would almost be a minute where the network can mimic its results practically"}, {"start": 146.48, "end": 148.11999999999998, "text": " instantaneously."}, {"start": 148.11999999999998, "end": 153.44, "text": " A limitation of this technique is that if the input is too sparse, unpleasant grid artifacts"}, {"start": 153.44, "end": 154.44, "text": " may appear."}, {"start": 154.44, "end": 159.07999999999998, "text": " There are tons of more cool features in the paper, make sure to have a look as always"}, {"start": 159.07999999999998, "end": 161.28, "text": " it is available in the video description."}, {"start": 161.28, "end": 167.12, "text": " This is a really well thought out and well-presented work that I expect to be a true powerhouse,"}, {"start": 167.12, "end": 169.24, "text": " for terrain authoring in the future."}, {"start": 169.24, "end": 173.32, "text": " And in the meantime, we have reached 100,000 subscribers."}, {"start": 173.32, "end": 175.68, "text": " A hundred thousand fellow scholars."}, {"start": 175.68, "end": 176.6, "text": " Wow!"}, {"start": 176.6, "end": 181.68, "text": " This is absolutely amazing and honestly, I never thought that this would ever happen."}, {"start": 181.68, "end": 183.68, "text": " So, happy paperversary."}, {"start": 183.68, "end": 187.8, "text": " Thank you very much for coming along on this journey of science and I am very happy to see"}, {"start": 187.8, "end": 191.8, "text": " that the series brings joy and learning to more people than ever."}, {"start": 191.8, "end": 198.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=VrgYtFhVGmg
NVIDIA's AI Dreams Up Imaginary Celebrities! 👨‍⚖️
The paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation" and its source code is available here: http://research.nvidia.com/publication/2017-10_Progressive-Growing-of Our Patreon page with some really cool perks: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Raul Araújo da Silva, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2911332/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Hold on to your papers because these results are completely out of this world. You'll soon see why. In this work, high-resolution images of imaginary celebrities are generated via a generative adversarial network. This is an architecture where two neural networks battle each other. The generator network is the artist who tries to create convincing, real-looking images and the discriminator network, the critic, tries to tell a fake image from a real one. The artist learns from the feedback of the critic and will improve itself to come up with better-quality images, and in the meantime, the critic also develops a sharp eye for fake images. These two adversaries push each other until they are both adept at their tasks. A classical drawback of this architecture is that it is typically extremely slow to train, and these networks are often quite shallow, which means that we get low-resolution images that are devoid of sharp details. However, as you can see here, these are high-resolution images with tons of details. So, how is that possible? Here comes the solution from scientists at NVIDIA. Initially, they start out with tiny shallow neural networks for both the artist and the critic, and as time goes by, both of these neural networks are progressively grown. They get deeper and deeper over time. This way, the training process is more stable than using deeper neural networks from scratch. It not only generates pictures, but it can also compute high-resolution intermediate images via latent space interpolation. It can also learn object categories from a bunch of training data and generate new samples. And if you take a look at the roster of scientists on this project, you will see that they are computer graphics researchers who recently set foot in the world of machine learning. And man, do they know their stuff and how to present a piece of work. And now comes something that is the absolute most important part of the evaluation, which should be a must for every single paper in this area. These neural networks were trained on a bunch of images of celebrities and are now generating new ones. However, if all we are shown is a new image, we don't know how close it is to the closest image in the training set. If the network is severely overfitting, it would essentially copy-paste samples from there, like a student in class who hasn't learned a single thing, just memorized the textbook. Actually, what is even worse is that this would mean that the worst learning algorithm, one that hasn't learned anything but memorized the whole database, would look the best. That's not useful knowledge. And here, you see the nearest neighbors, the images that are the closest in this database to the newly synthesized images. It shows really well that the AI has learned the concept of a human face extremely well and can synthesize convincing-looking new images that are not just copy-pasted from the training set. The source code, pre-trained network, and one hour of imaginary celebrities are also available in the description; check them out. Premium-quality service right there. And if you feel that eight of these videos a month is worth a dollar, please consider supporting us on Patreon. You can also get really cool additional perks like early access, and it helps us make better videos, grow, and tell these incredible stories to a larger audience. Details are available in the description.
Thanks for watching and for your generous support, and I'll see you next time.
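A sketch of the nearest-neighbour check praised above: for every generated image, find the closest training image and compare the two side by side. Plain NumPy and pixel-space L2 distance are assumed here purely for illustration; the random arrays stand in for the real celebrity dataset and the GAN's outputs.

import numpy as np

def nearest_neighbors(generated, training):
    """generated: (G, D) and training: (T, D) arrays of flattened images."""
    d2 = (generated ** 2).sum(1)[:, None] + (training ** 2).sum(1)[None, :] \
         - 2.0 * generated @ training.T        # squared L2 distances, shape (G, T)
    return d2.argmin(axis=1)                   # index of the closest training image per sample

training_set = np.random.rand(1000, 32 * 32 * 3)   # stand-in for the training images
samples = np.random.rand(8, 32 * 32 * 3)           # stand-in for newly synthesized images
closest = nearest_neighbors(samples, training_set)
# If samples[i] is essentially identical to training_set[closest[i]], the model has
# memorized the database rather than learned the concept of a face.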
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.48, "end": 9.08, "text": " Hold on to your papers because these results are completely out of this world."}, {"start": 9.08, "end": 10.56, "text": " You'll soon see why."}, {"start": 10.56, "end": 18.16, "text": " In this work, high-resolution images of imaginary celebrities are generated via a generative adversarial network."}, {"start": 18.16, "end": 21.72, "text": " This is an architecture where two neural networks battle each other."}, {"start": 21.72, "end": 27.240000000000002, "text": " The generator network is the artist who tries to create convincing, real-looking images"}, {"start": 27.24, "end": 32.6, "text": " and the discriminator network, the critic, tries to tell a fake image from a real one."}, {"start": 32.6, "end": 38.96, "text": " The artist learns from the feedback of the critic and will improve itself to come up with better quality images,"}, {"start": 38.96, "end": 44.04, "text": " and in the meantime, the critic also develops a sharp eye for fake images."}, {"start": 44.04, "end": 49.28, "text": " These two adversaries push each other until they are both adept at their tasks."}, {"start": 49.28, "end": 54.4, "text": " A classical drawback of this architecture is that it is typically extremely slow to train,"}, {"start": 54.4, "end": 62.16, "text": " and these networks are often quite shallow, which means that we get low-resolution images that are devoid of sharp details."}, {"start": 62.16, "end": 67.52, "text": " However, as you can see here, these are high-resolution images with tons of details."}, {"start": 67.52, "end": 69.36, "text": " So, how is that possible?"}, {"start": 69.36, "end": 72.56, "text": " So here comes the solution from scientists at NVIDIA."}, {"start": 72.56, "end": 78.4, "text": " Initially, they start out with tiny shallow neural networks for both the artist and the critic,"}, {"start": 78.4, "end": 83.08, "text": " and as time goes by, both of these neural networks are progressively grown."}, {"start": 83.08, "end": 85.12, "text": " They get deeper and deeper over time."}, {"start": 85.12, "end": 90.36, "text": " This way, the training process is more stable than using deeper neural networks from scratch."}, {"start": 90.36, "end": 98.2, "text": " It not only generates pictures, but it can also compute high-resolution intermediate images via latent space interpolation."}, {"start": 98.2, "end": 103.88, "text": " It can also learn object categories from a bunch of training data and generate new samples."}, {"start": 103.88, "end": 107.28, "text": " And if you take a look at the roster of scientists on this project,"}, {"start": 107.28, "end": 113.4, "text": " you will see that they are computer graphics researchers who recently set foot in the world of machine learning."}, {"start": 113.4, "end": 117.52, "text": " And man, do they know their stuff and how to present a piece of work?"}, {"start": 117.52, "end": 122.4, "text": " And now comes something that is the absolute most important part of devaluation"}, {"start": 122.4, "end": 125.84, "text": " that should be a must for every single paper in this area."}, {"start": 125.84, "end": 130.08, "text": " These neural networks were trained on a bunch of images of celebrities"}, {"start": 130.08, "end": 132.4, "text": " and are now generating new ones."}, {"start": 132.4, "end": 138.96, "text": " However, if all we are shown is a new image, we don't know how 
close it is to the closest image in the training set."}, {"start": 138.96, "end": 144.16, "text": " If the network is severely overfitting, it would essentially copy-paste samples from there,"}, {"start": 144.16, "end": 149.12, "text": " like a student in class who hasn't learned a single thing, just memorize the textbook."}, {"start": 149.12, "end": 155.52, "text": " Actually, what is even worse is that this would mean that the worst learning algorithm that hasn't learned anything,"}, {"start": 155.52, "end": 158.68, "text": " but memorize the whole database would look the best."}, {"start": 158.68, "end": 160.56, "text": " That's not useful knowledge."}, {"start": 160.56, "end": 167.36, "text": " And here, you see the nearest neighbors, the images that are the closest in this database to the newly synthesized images."}, {"start": 167.36, "end": 172.8, "text": " It shows really well that the AI has learned the concept of a human face extremely well"}, {"start": 172.8, "end": 178.48000000000002, "text": " and can synthesize convincing looking new images that are not just copy-pasted from the training set."}, {"start": 178.48000000000002, "end": 186.24, "text": " The source code, pre-trained network, and one hour of imaginary celebrities are also available in the description, check them out."}, {"start": 186.24, "end": 188.96, "text": " Premium quality service right there."}, {"start": 188.96, "end": 192.88, "text": " And if you feel that eight of these videos a month is worth a dollar,"}, {"start": 192.88, "end": 195.44, "text": " please consider supporting us on Patreon."}, {"start": 195.44, "end": 199.20000000000002, "text": " You can also get really cool additional perks like early access,"}, {"start": 199.20000000000002, "end": 205.28, "text": " and it helps us make better videos, grow, and tell these incredible stories to a larger audience."}, {"start": 205.28, "end": 207.44, "text": " Details are available in the description."}, {"start": 207.44, "end": 219.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Lcxz6dtYjI4
Generalizing AI With Neural Task Programming | Two Minute Papers #206
The paper "Neural Task Programming: Learning to Generalize Across Hierarchical Tasks" is available here: https://stanfordvl.github.io/ntp/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Robin Graham, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2137333/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. One of the holy grail problems of machine learning research is to achieve artificial general intelligence, AGI in short. Deep Blue was able to defeat the genius Kasparov in chess, but it was unable to tell us what the time was. Algorithms of this type are often referred to as weak AI, or narrow AI: a technique that excels, maybe even at a superhuman level, at one task, but has little or no knowledge about anything else. A key to extending these algorithms would be to design them in a way that their knowledge generalizes well to other problems. This is what we call transfer learning, and this collaboration between the Stanford AI Lab and Caltech goes by the name Neural Task Programming and tries to tackle this problem. A solution to practically any problem we are trying to solve can be written as a series of tasks. These are typically complex actions, like cleaning a table or performing a backflip, that are difficult to transfer to a different problem. This technique is a bit like divide-and-conquer-type algorithms that aggressively try to decompose big, difficult tasks into smaller, more manageable pieces. The smaller and easier to understand the pieces are, the more reusable they are, and the better they generalize. Let's have a look at an example. For instance, in a problem where we need to pick and place objects, this series of tasks can be decomposed into picking and placing. These can be further diced into a series of even smaller tasks such as gripping, moving, and releasing actions. However, if the learning takes place like this, we can now specify different variations of these tasks, and the algorithm will quickly understand how to adapt the structure of these small tasks to efficiently solve new problems. The new algorithm generalizes really well for tasks with different lengths, topologies, and changing objectives. If you take a look at the paper, you'll also find some more information on adversarial dynamics, which lists some problem variants where a really unpleasant adversary pushes things around on the table from time to time to mess with the program, and there are some results that show that the algorithm is able to recover from these failure states quite well. Really cool. Now, please don't take this as a complete solution for AGI, because it is a fantastic piece of work, but it's definitely not that. However, it may be a valuable puzzle piece to build towards the final solution. This is research. We advance one step at a time. Man, what an amazing time to be alive. Thanks for watching, and for your generous support, and I'll see you next time.
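A toy sketch of the divide-and-conquer idea described above, not the paper's neural program: a composite task is recursively decomposed until only primitive actions remain, and the same small sub-programs can be reused across different task specifications. The task names below are hypothetical.

# Primitive actions the robot can execute directly
PRIMITIVES = {"grip", "move", "release"}

# Hypothetical task hierarchy for a pick-and-place problem
DECOMPOSE = {
    "pick_and_place": ["pick", "place"],
    "pick": ["move", "grip"],
    "place": ["move", "release"],
}

def expand(task):
    """Return the flat sequence of primitive actions for a (possibly composite) task."""
    if task in PRIMITIVES:
        return [task]
    return [action for sub in DECOMPOSE[task] for action in expand(sub)]

print(expand("pick_and_place"))   # ['move', 'grip', 'move', 'release']
# A new variation of the task only needs a different top-level decomposition;
# the small, reusable pieces at the bottom stay the same.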
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.08, "end": 10.4, "text": " One of the holy grair problems of machine learning research is to achieve artificial general intelligence,"}, {"start": 10.4, "end": 15.68, "text": " AGI in short. Deep Blue was able to defeat the genius Kasperov in chess,"}, {"start": 15.68, "end": 18.400000000000002, "text": " but it was unable to tell us what the time was."}, {"start": 18.400000000000002, "end": 23.28, "text": " Algorithms of this type we often refer to as weak AI, or narrow AI,"}, {"start": 23.28, "end": 28.32, "text": " a technique that excels, or is maybe even on a superhuman level at a task,"}, {"start": 28.32, "end": 31.6, "text": " but has zero or no knowledge about anything else."}, {"start": 31.6, "end": 36.480000000000004, "text": " A key to extend these algorithms would be to design them in a way that their knowledge"}, {"start": 36.480000000000004, "end": 41.04, "text": " generalizes well to other problems. This is what we call transfer learning,"}, {"start": 41.04, "end": 45.84, "text": " and this collaboration between the Stanford AI lab and Kaltek goes by the name"}, {"start": 45.84, "end": 49.519999999999996, "text": " Neural Task Programming and tries to tackle this problem."}, {"start": 49.519999999999996, "end": 55.44, "text": " A solution to practically any problem we are trying to solve can be written as a series of tasks."}, {"start": 55.44, "end": 60.48, "text": " These are typically complex actions, like cleaning a table or performing a backflip"}, {"start": 60.48, "end": 63.12, "text": " that are difficult to transfer to a different problem."}, {"start": 63.12, "end": 68.72, "text": " This technique is a bit like divide and conquer type algorithms that aggressively try to decompose"}, {"start": 68.72, "end": 72.88, "text": " big difficult tasks into smaller, more manageable pieces."}, {"start": 72.88, "end": 77.44, "text": " The smaller and easier to understand the pieces are, the more reusable they are,"}, {"start": 77.44, "end": 81.12, "text": " and the better they generalize. Let's have a look at an example."}, {"start": 81.12, "end": 86.08, "text": " For instance, in a problem where we need to pick and place objects, these series of tasks"}, {"start": 86.08, "end": 92.88000000000001, "text": " can be decomposed into picking and placing. These can be further diced into a series of even smaller"}, {"start": 92.88000000000001, "end": 98.96000000000001, "text": " tasks such as gripping, moving, and releasing actions. However, if the learning takes place like"}, {"start": 98.96000000000001, "end": 104.48, "text": " this, we can now specify different variations of these tasks, and the algorithm will quickly"}, {"start": 104.48, "end": 110.24000000000001, "text": " understand how to adapt the structure of these small tasks to efficiently solve new problems."}, {"start": 110.24, "end": 115.44, "text": " The new algorithm generalizes really well for tasks with different lengths, topologies,"}, {"start": 115.44, "end": 119.75999999999999, "text": " and changing objectives. 
If you take a look at the paper, you'll also find some more"}, {"start": 119.75999999999999, "end": 124.72, "text": " information on adversarial dynamics, which lists some problem variants where a really"}, {"start": 124.72, "end": 130.07999999999998, "text": " unpleasant adversary pushes things around on the table from time to time to mess with the program,"}, {"start": 130.07999999999998, "end": 134.88, "text": " and there are some results that show that the algorithm is able to recover from these failure"}, {"start": 134.88, "end": 140.88, "text": " states quite well. Really cool. Now, please don't take this as a complete solution for AGI,"}, {"start": 140.88, "end": 145.28, "text": " because it is a fantastic piece of work, but it's definitely not that. However,"}, {"start": 145.28, "end": 149.2, "text": " it may be a valuable puzzle piece to build towards the final solution."}, {"start": 149.2, "end": 155.28, "text": " This is Research. We advance one step at a time. Man, what an amazing time to be alive."}, {"start": 155.28, "end": 168.48, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
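The hierarchical decomposition described in the transcript above can be illustrated with a tiny sketch. This is not the Neural Task Programming architecture itself (which learns the decomposition with a neural program interpreter); it is only a hand-written illustration of how a pick-and-place task might be recursively broken into smaller, reusable sub-tasks. All function and object names here are hypothetical.

```python
# A minimal, hand-written sketch of hierarchical task decomposition,
# in the spirit of (but much simpler than) Neural Task Programming.
# All names are illustrative placeholders, not the paper's API.

def grip(obj):
    print(f"gripping {obj}")

def move(obj, target):
    print(f"moving {obj} towards {target}")

def release(obj):
    print(f"releasing {obj} at its destination")

def pick(obj):
    # "pick" is itself decomposed into smaller, reusable primitives.
    grip(obj)
    move(obj, "free space")

def place(obj, target):
    move(obj, target)
    release(obj)

def pick_and_place(objects, targets):
    # The top-level task is just a sequence of mid-level sub-tasks;
    # changing the task specification (different objects, targets, or
    # ordering) only changes this thin top layer, while the low-level
    # primitives stay reusable.
    for obj, target in zip(objects, targets):
        pick(obj)
        place(obj, target)

if __name__ == "__main__":
    pick_and_place(["red block", "blue block"], ["bin A", "bin B"])
```

In the actual paper, the mapping from a task specification to this hierarchy of sub-programs is learned by a neural network rather than hand-written, which is what lets it adapt to new task variations.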
Two Minute Papers
https://www.youtube.com/watch?v=p831XtyLA5M
AI Competitive Self-Play | Two Minute Papers #205
The paper "Emergent Complexity via Multi-Agent Competition" and its source code is available here: https://arxiv.org/abs/1710.03748 https://github.com/openai/multiagent-competition We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/5BaDVq Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Earlier, we had an episode about OpenAI's absolutely amazing algorithm that mastered Dota 2, a competitive online battle arena game, and managed to beat some of the best players in the world in a relatively limited one versus one game mode. While the full 5 versus 5 version of this learning algorithm is still in the works, scientists at OpenAI experimented with some self-play in other games and have found some remarkable results. You can see here that most of these amusing experiments take place in a made-up 3D game with simulated physics. For instance, performing well with these humanoid creatures means controlling 17 actuated joints properly. These agents use a reinforcement learning algorithm to maximize a reward; for instance, a sumo warrior gets a thousand points for pushing their opponent out of the ring. The first interesting thing is that a learning curriculum was used, which means that the algorithm was allowed to explore on its own by relaxing the strict scores that are given only upon winning. This, combined with the fact that these agents play against themselves, led to some remarkable emergent behaviors. Here you can see from the score how much of a difference this curriculum makes. And you can also see that whenever a plot is symmetric, that means that they are zero-sum games. So if one agent wins a given number of points, the other loses the same amount. The self-play part is also particularly interesting, as many agents are being trained in parallel at the same time. And if we are talking about one versus one games, we have to create some useful logic to decide who to pair with whom. It seems that training against an older version of a previously encountered opponent was the best strategy. This makes sense because they are running a similar algorithm. And for self-play, this means that the algorithm is asked to defeat an older version of itself. If it can reliably do that, it will lead to a smooth and predictable learning process. It is kind of incredible to think about the fact that we have a virtual world with a bunch of simulated learning creatures, and we are omnipotent beings trying to craft the optimal learning experience for them. The perks of being a researcher in machine learning. And we are even being paid for this. Isn't this incredible? Ssh, don't tell anyone about this. There are so many interesting results here and so much to talk about. For instance, we haven't even talked about transfer learning, where these creatures learn to generalize knowledge learned from previous tasks to tackle new challenges more efficiently. Make sure to have a look at the paper, and the source code is available for everyone, free of charge. If you're one of those fellow tinkerers, you'll be more than happy to look into the video description. If you wish to hear more about transfer learning, subscribe and turn on notifications, because the next episode is going to be about some really cool results in this area. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two minute papers with Karojona Ifeher."}, {"start": 4.28, "end": 11.0, "text": " Earlier, we had an episode about OpenAI's absolutely amazing algorithm that mastered Dota 2,"}, {"start": 11.0, "end": 16.8, "text": " a competitive online battle arena game and managed to beat some of the best players in the world"}, {"start": 16.8, "end": 19.96, "text": " in a relatively limited one versus one game mode."}, {"start": 19.96, "end": 24.52, "text": " While the full 5 versus 5 version of this learning algorithm is still in the works,"}, {"start": 24.52, "end": 31.6, "text": " scientists at OpenAI experimented with some self-play in other games and have found some remarkable results."}, {"start": 31.6, "end": 38.2, "text": " You can see here that most of these amusing experiments take place in a made up 3D game with simulated physics."}, {"start": 38.2, "end": 45.120000000000005, "text": " For instance, performing well with these humanoid creatures means controlling 17 actuated joints properly."}, {"start": 45.120000000000005, "end": 49.44, "text": " These agents use a reinforcement learning algorithm to maximize a reward,"}, {"start": 49.44, "end": 54.879999999999995, "text": " for instance, a sumo warrior gets a thousand points for pushing their opponent out of the ring."}, {"start": 54.879999999999995, "end": 61.16, "text": " The first interesting thing is that a learning curriculum was used, which means that the algorithm was allowed"}, {"start": 61.16, "end": 66.32, "text": " to explore on their own by relaxing the strict scores that are given only one winning."}, {"start": 66.32, "end": 73.0, "text": " This is combined with the fact that these agents play against themselves led to some remarkable emergent behaviors."}, {"start": 73.0, "end": 77.0, "text": " Here you can see with the score how much of a difference this curriculum makes."}, {"start": 77.0, "end": 82.04, "text": " And you also see that whenever a plot is symmetric, that means that they are zero sum games."}, {"start": 82.04, "end": 86.76, "text": " So if one agent wins a given number of points, the other loses the same amount."}, {"start": 86.76, "end": 93.72, "text": " The self-play part is also particularly interesting, as many agents are being trained in parallel at the same time."}, {"start": 93.72, "end": 100.68, "text": " And if we are talking about one versus one games, we have to create some useful logic to decide who to pair with whom."}, {"start": 100.68, "end": 106.48, "text": " It seems that training against an older version of a previously challenged opponent was the best strategy."}, {"start": 106.48, "end": 109.80000000000001, "text": " This makes sense because they are running a similar algorithm."}, {"start": 109.80000000000001, "end": 115.32000000000001, "text": " And for self-play, this means that the algorithm is asked to defeat an older version of itself."}, {"start": 115.32000000000001, "end": 120.52000000000001, "text": " If it can reliably do that, it will lead to a smooth and predictable learning process."}, {"start": 120.52000000000001, "end": 127.08000000000001, "text": " It is kind of incredible to think about the fact that we have a virtual world with a bunch of simulated learning creatures,"}, {"start": 127.08000000000001, "end": 132.4, "text": " and we are omnipotent beings trying to craft the optimal learning experience for them."}, {"start": 132.4, "end": 135.32, "text": " The perks of being a researcher in machine 
learning."}, {"start": 135.32, "end": 137.48, "text": " And we are even being paid for this."}, {"start": 137.48, "end": 138.92, "text": " Isn't this incredible?"}, {"start": 138.92, "end": 141.48, "text": " Ssh, don't tell anyone about this."}, {"start": 141.48, "end": 145.07999999999998, "text": " There are so many interesting results here and so much to talk about."}, {"start": 145.07999999999998, "end": 155.64, "text": " For instance, we haven't even talked about transfer learning, where these creatures learn to generalize their knowledge learned from previous tasks to tackle new challenges more efficiently."}, {"start": 155.64, "end": 160.72, "text": " Make sure to have a look at the paper and the source code is available for everyone free of charge."}, {"start": 160.72, "end": 165.44, "text": " If you're one of those fellow thinkers, you'll be more than happy to look into the video description."}, {"start": 165.44, "end": 174.44, "text": " If you wish to hear more about transfer learning, subscribe and turn on notifications because the next episode is going to be about some really cool results in this area."}, {"start": 174.44, "end": 196.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7wt-9fjPDjQ
Disney's AI Learns To Render Clouds | Two Minute Papers #204
The paper "Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks" is available here: http://drz.disneyresearch.com/~jnovak/publications/DeepScattering/ http://simon-kallweit.me/deepscattering/ https://tom94.net/data/publications/kallweit17deep/interactive-viewer/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2920167/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a fully in-house Disney paper on how to teach a neural network to capture the appearance of clouds. This topic is one of my absolute favorites because it is at the intersection of the two topics I love most, computer graphics and machine learning. Hell yeah! Generally, we use light simulation programs to render these clouds, and the difficult part of this is that we have to perform something that is called volumetric path tracing. This is a technique where we have to simulate rays of light that do not necessarily bounce off of the surface of objects, but may penetrate their surfaces and undergo many scattering events. Understandably, in the case of clouds, capturing volumetric scattering properly is a key element in modeling their physical appearance. However, we have to simulate millions and millions of light paths with potentially hundreds of scattering events, which is a computationally demanding task even in the age of rapidly improving hardware. As you can see here, the more we bump up the number of possible simulated scattering events, the closer we get to reality, but the longer it takes to render an image. In the case of the bright clouds here, rendering an image like this can take up to 30 hours. In this work, a nice hybrid approach is proposed where a neural network learns the concept of in-scattered radiance and predicts it rapidly, so this part we don't have to compute ourselves. It is a hybrid because some parts of the renderer are still using the traditional algorithms. The dataset used for training the neural network contains 75 different clouds, some of which are procedurally generated by a computer and some are drawn by artists, to expose the learning algorithm to a large variety of cases. As a result, these images can be rendered in a matter of seconds to minutes. Normally, this would take many, many hours on a powerful computer. Here is another result with traditional path tracing. And now the same with deep scattering. Yep, that's how long it takes. The scattering parameters can also be interactively edited without us having to wait for hours to see if the new settings are better than the previous ones. Dialing in the perfect results typically takes an extremely lengthy trial-and-error phase, which now can be done almost instantaneously. The technique also supports a variety of different scattering models. As with all results, they have to be compared to the ground truth renderings, and as you can see here, they seem mostly indistinguishable from reality. It is also temporally stable, so animation rendering can take place flicker-free, as is demonstrated here in the video. I think this work is also a great testament to how these incredible learning algorithms can accelerate progress in practically all fields of science. And given that this work was done by Disney, I am pretty sure we can expect tons of photorealistic clouds in their upcoming movies in the near future. There are tons more details discussed in the paper, which is remarkably well produced. Make sure to have a look, the link is in the video description. This is a proper, proper paper; you don't want to miss out on this one. Also, if you enjoy this episode and you feel that the series provides you value in the form of enjoyment or learning, please consider supporting us on Patreon. You can pick up cool perks there, like deciding the order of the next few episodes, and you also help us make better videos in the future.
Those are available in the description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.120000000000001, "text": " This is a fully-in-house Disney paper on how to teach a neural network to capture the appearance"}, {"start": 11.120000000000001, "end": 12.4, "text": " of clouds."}, {"start": 12.4, "end": 18.04, "text": " This topic is one of my absolute favorites because it is in the intersection of the two topics"}, {"start": 18.04, "end": 21.56, "text": " I love most, computer graphics and machine learning."}, {"start": 21.56, "end": 22.56, "text": " Hell yeah!"}, {"start": 22.56, "end": 27.560000000000002, "text": " Generally, we use light simulation programs to render these clouds and the difficult part"}, {"start": 27.56, "end": 33.04, "text": " of this is that we have to perform something that is called volumetric path tracing."}, {"start": 33.04, "end": 37.879999999999995, "text": " This is a technique where we have to simulate rays of light that do not necessarily bounce"}, {"start": 37.879999999999995, "end": 43.84, "text": " off of the surface of objects that may penetrate their surfaces and undergo many scattering"}, {"start": 43.84, "end": 44.84, "text": " events."}, {"start": 44.84, "end": 50.239999999999995, "text": " Understandably, in the case of clouds, capturing volumetric scattering properly is a key element"}, {"start": 50.239999999999995, "end": 52.16, "text": " in modeling their physical appearance."}, {"start": 52.16, "end": 57.519999999999996, "text": " However, we have to simulate millions and millions of light pass with potentially hundreds"}, {"start": 57.519999999999996, "end": 62.64, "text": " of scattering events which is a computationally demanding task even in the age of rapidly"}, {"start": 62.64, "end": 64.0, "text": " improving hardware."}, {"start": 64.0, "end": 68.08, "text": " As you can see here, the more we bump up the number of possible simulated scattering"}, {"start": 68.08, "end": 73.44, "text": " events, the closer we get to reality, but the longer it takes to render an image."}, {"start": 73.44, "end": 79.56, "text": " In the case of bright clouds here, rendering an image like this can take up to 30 hours."}, {"start": 79.56, "end": 84.72, "text": " In this work, a nice hybrid approach is proposed where a neural network learns the concept"}, {"start": 84.72, "end": 90.72, "text": " of in-scattered radiance and predicts it rapidly so this part we don't have to compute ourselves."}, {"start": 90.72, "end": 95.88, "text": " It is a hybrid because some parts of the renderer are still using the traditional algorithms."}, {"start": 95.88, "end": 101.68, "text": " The dataset used for training the neural network contains 75 different clouds, some of which"}, {"start": 101.68, "end": 107.32000000000001, "text": " are procedurally generated by a computer and some are drawn by artists to expose the"}, {"start": 107.32, "end": 110.52, "text": " learning algorithm to a large variety of cases."}, {"start": 110.52, "end": 115.19999999999999, "text": " As a result, these images can be rendered in a matter of seconds to minutes."}, {"start": 115.19999999999999, "end": 118.96, "text": " Normally, this would take many, many hours on a powerful computer."}, {"start": 118.96, "end": 128.79999999999998, "text": " Here is another result with traditional path tracing."}, {"start": 128.79999999999998, "end": 132.0, "text": " And now the same with deep scattering."}, {"start": 132.0, "end": 135.32, "text": " Yep, 
that's how long it takes."}, {"start": 135.32, "end": 140.68, "text": " The scattering parameters can also be interactively edited without us having to wait for hours"}, {"start": 140.68, "end": 144.64, "text": " to see if the new settings are better than the previous ones."}, {"start": 144.64, "end": 149.64, "text": " Dialing in the perfect results typically takes an extremely lengthy trial and error phase"}, {"start": 149.64, "end": 153.16, "text": " which now can be done almost instantaneously."}, {"start": 153.16, "end": 160.16, "text": " The technique also supports a variety of different scattering models."}, {"start": 160.16, "end": 164.6, "text": " As with all results, they have to be compared to the ground truth renderings and as you can"}, {"start": 164.6, "end": 169.2, "text": " see here, they seem mostly indistinguishable from reality."}, {"start": 169.2, "end": 174.32, "text": " It is also temporarily stable so animation rendering can take place flicker free as is"}, {"start": 174.32, "end": 176.44, "text": " demonstrated here in the video."}, {"start": 176.44, "end": 181.68, "text": " I think this work is also a great testament to show how these incredible learning algorithms"}, {"start": 181.68, "end": 185.84, "text": " can accelerate progress in practically all fields of science."}, {"start": 185.84, "end": 190.84, "text": " And given that this work was done by Disney, I am pretty sure we can expect tons of photorealistic"}, {"start": 190.84, "end": 193.95999999999998, "text": " clouds in their upcoming movies in the near future."}, {"start": 193.96, "end": 198.96, "text": " There are tons of more details discussed in the paper which is remarkably well produced."}, {"start": 198.96, "end": 201.76000000000002, "text": " Make sure to have a look, the link is in the video description."}, {"start": 201.76000000000002, "end": 205.96, "text": " This is a proper, proper paper you don't want to miss out on this one."}, {"start": 205.96, "end": 211.28, "text": " Also, if you enjoy this episode and you feel that the series provides you value in the form"}, {"start": 211.28, "end": 215.52, "text": " of enjoyment or learning, please consider supporting us on Patreon."}, {"start": 215.52, "end": 220.16, "text": " You can pick up cool perks there like deciding the order of the next few episodes and you"}, {"start": 220.16, "end": 223.28, "text": " also help us make better videos in the future."}, {"start": 223.28, "end": 225.04, "text": " Those are available in the description."}, {"start": 225.04, "end": 245.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
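To make the hybrid idea from the cloud-rendering transcript above more concrete, here is a rough sketch of what "a neural network predicts the in-scattered radiance so we don't have to compute it" might look like inside a ray-marching loop. The density field, descriptor, and predictor below are hypothetical stand-ins, not Disney's actual Deep Scattering implementation.

```python
import numpy as np

# A rough sketch of a hybrid volumetric renderer: classical ray marching
# for visibility, with a (here: fake) learned predictor replacing the
# expensive multiple-scattering estimate. All details are hypothetical.

def sample_density(position):
    # Placeholder cloud density field: a soft sphere around the origin.
    return max(0.0, 1.0 - float(np.linalg.norm(position)))

def local_descriptor(position, light_dir):
    # The real paper gathers a hierarchical descriptor of the density
    # field around the shading point; here we fake it with a few samples
    # taken towards the light.
    offsets = [0.0, 0.1, 0.2, 0.4]
    return np.array([sample_density(position + light_dir * o) for o in offsets])

def predicted_radiance(descriptor):
    # Stand-in for the trained radiance-predicting neural network.
    return float(np.exp(-descriptor.sum()))

def march_ray(origin, direction, light_dir, steps=64, step_size=0.05):
    transmittance, radiance = 1.0, 0.0
    position = np.array(origin, dtype=float)
    step = np.asarray(direction, dtype=float) * step_size
    for _ in range(steps):
        density = sample_density(position)
        if density > 0.0:
            # Instead of tracing hundreds of scattering events per sample,
            # query the learned predictor for the in-scattered radiance.
            in_scattered = predicted_radiance(local_descriptor(position, light_dir))
            radiance += transmittance * density * in_scattered * step_size
            transmittance *= np.exp(-density * step_size)
        position += step
    return radiance

print(march_ray(origin=[0, 0, -2], direction=[0, 0, 1], light_dir=np.array([0.0, 1.0, 0.0])))
```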
Two Minute Papers
https://www.youtube.com/watch?v=dqxqbvyOnMY
Video Game Graphics To Reality And Back | Two Minute Papers #203
The paper "Unsupervised Image-to-Image Translation Networks" and its source code is available here: https://arxiv.org/pdf/1703.00848.pdf https://github.com/mingyuliutw/UNIT We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/UASi2i Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Researchers at NVIDIA have been hitting it out of the park lately, and this work is no different. This technique performs image translation, which means that the input is an image of our choice, and the output is the same image with somewhat different semantics. For instance, an image of a city can be translated to a map of this city, or a daytime photo or video can be converted to appear as if it were shot during the night. And throughout the video, you'll see so much more of these exciting applications. A typical way of accomplishing this is by using generative adversarial networks. This is a pair of neural networks where the generator network creates new synthetic images, trying to fool a discriminator network, which learns to tell a fake synthesized image from a real one. These two neural networks learn together, where one tries to come up with better solutions to fool the discriminator, while the discriminator seeks to get better at telling forgeries from real photographs. In the end, this rivalry makes both of them get better and better, and the final result is an excellent technique to create convincing image translations. In this work, not two, but six of these networks are being used, so make sure to have a look at the paper for details. There was an earlier work that was able to perform image translation by leaning on a novel cycle consistency constraint. This means that we assume that the source image can be translated to the target image, and then this target image can be translated back to look exactly like the source. This kind of means that these translations are not arbitrary and are mathematically meaningful operations. Here, the new technique builds on a novel assumption that there exists a latent space in which the input and the output images can both coexist. This latent space is basically an intuitive and concise representation of some more complicated data. For instance, earlier, we experimented a bit with fonts and saw that even though the theory of font design is not easy, we can create a two-dimensional latent space that encodes simple properties like curvature and can describe many, many fonts in an intuitive manner. Remarkably, with this new work, converting dogs and cats into different breeds is also a possibility. Interestingly, it can also perform real-to-synthetic image translation and vice versa. So that means that it can create video game footage from our real-world videos and, even more remarkably, convert video game footage to real-world video. This is insanity, one of the craziest ideas I've seen in a while. Bravo! And now, hold on to your papers, because it can also perform attribute-based image translation. This means that, for instance, we can grab an image of a human face and transform the model's hair to blonde, or add sunglasses or smiles to it at will. A limitation of this technique is that training is still non-trivial, as it still relies on generative adversarial networks, and it is not yet clear whether there is a point to which the training converges or not. The source code of this project is also available. Make sure to take a good look at the license before doing anything, because it is under the Creative Commons non-commercial and no-derivatives license. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehair."}, {"start": 4.28, "end": 10.0, "text": " Researchers at MVIDIA have been hitting it out of the park lately, and this work is no different."}, {"start": 10.0, "end": 15.5, "text": " This technique performs image translation, which means that the input is an image of our choice,"}, {"start": 15.5, "end": 19.5, "text": " and the output is the same image with somewhat different semantics."}, {"start": 19.5, "end": 24.1, "text": " For instance, an image of a city can be translated to a map of this city,"}, {"start": 24.1, "end": 30.1, "text": " or a daytime photo or video can be converted to a pier as if it were shot during the night."}, {"start": 30.1, "end": 34.6, "text": " And throughout the video, you'll see so much more of these exciting applications."}, {"start": 34.6, "end": 39.6, "text": " A typical way of accomplishing this is done by using generative adversarial networks."}, {"start": 39.6, "end": 45.6, "text": " This is a pair of neural networks where the generator network creates new synthetic images"}, {"start": 45.6, "end": 52.6, "text": " trying to fool a discriminator network which learns to tell a fake synthesized image from a real one."}, {"start": 52.6, "end": 59.1, "text": " These two neural networks learn together where one tries to come up with better solutions to fool the discriminator,"}, {"start": 59.1, "end": 64.6, "text": " where the discriminator seeks to get better at telling forgeries from the real photographs."}, {"start": 64.6, "end": 68.6, "text": " In the end, this rivalry makes both of them get better and better,"}, {"start": 68.6, "end": 73.6, "text": " and the final result is an excellent technique to create convincing image translations."}, {"start": 73.6, "end": 79.6, "text": " In this work, not two, but six of these networks are being used, so make sure to have a look at the paper for details."}, {"start": 79.6, "end": 87.1, "text": " There was an earlier work that was able to perform image translation by leaning on a novel cycle consistency constraint."}, {"start": 87.1, "end": 92.1, "text": " This means that we assume that the source image can be translated to the target image,"}, {"start": 92.1, "end": 97.6, "text": " and then this target image can be translated back to look exactly like the source."}, {"start": 97.6, "end": 103.6, "text": " This kind of means that these translations are not arbitrary and are mathematically meaningful operations."}, {"start": 103.6, "end": 112.6, "text": " Here, the new technique builds on a novel assumption that there exists a latent space in which the input and the output images can both coexist."}, {"start": 112.6, "end": 119.6, "text": " This latent space is basically an intuitive and concise representation of some more complicated data."}, {"start": 119.6, "end": 126.6, "text": " For instance, earlier, we experimented a bit with fonts and had seen that even though the theory of font design is not easy,"}, {"start": 126.6, "end": 135.6, "text": " we can create a two-dimensional latent space that encodes simple properties like curvature that can describe many, many fonts in an intuitive manner."}, {"start": 135.6, "end": 141.6, "text": " Remarkably, with this new work, converting dogs and cats into different breeds is also a possibility."}, {"start": 147.6, "end": 153.6, "text": " Interestingly, it can also perform real to synthetic image translation and vice versa."}, {"start": 153.6, "end": 160.6, 
"text": " So that means that it can create video game footage from our real world videos and even more remarkably,"}, {"start": 160.6, "end": 163.6, "text": " convert video game footage to real world video."}, {"start": 163.6, "end": 167.6, "text": " This is insanity, one of the craziest ideas I've seen in a while."}, {"start": 167.6, "end": 174.6, "text": " Bravo! And now, hold on to your papers because it can also perform attribute-based image translation."}, {"start": 174.6, "end": 180.6, "text": " This means that for instance, we can grab an image of a human face and transform the model's hair to blonde,"}, {"start": 180.6, "end": 183.6, "text": " add sunglasses or smiles to it at will."}, {"start": 183.6, "end": 191.6, "text": " A limitation of this technique is that training is still non-trivial as it still relies on generative adversarial networks,"}, {"start": 191.6, "end": 196.6, "text": " and it is not yet clear whether there is a point to which the training converges or not."}, {"start": 196.6, "end": 199.6, "text": " The source code of this project is also available."}, {"start": 199.6, "end": 207.6, "text": " Make sure to take a good look at the license before doing anything because it is under the Creative Commons non-commercial and no-derivatives license."}, {"start": 207.6, "end": 211.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mmeoUZ_wRm4
Transferring AI To The Real World (OpenAI) | Two Minute Papers #202
The paper "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World" is available here: https://arxiv.org/pdf/1703.06907.pdf We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2874016/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk a lot about different AI algorithms that solve a variety of super-difficult tasks. These are typically tested within a software environment in the form of a simulation program. However, this often leaves open the question of whether these algorithms would really work in real-world environments. So what about that? This work from OpenAI goes by the name Domain Randomization and is about training an AI on relatively crude computer simulations in a way that can be transferred to the real world. The problem used to demonstrate this was localizing and grasping objects. Note that this algorithm has never seen any real images and was trained using simulated data. It only played a computer game, if you will. Now, the question we immediately think about is what the term Domain Randomization has to do with transferring simulation knowledge into reality. The key observation is that using simulated training data is okay, but we have to make sure that the AI is exposed to a diverse enough set of circumstances to obtain knowledge that generalizes properly, hence the term Domain Randomization. In these experiments, the following parameters were heavily randomized: the number of shapes and distractor objects on the table, the positions and textures of the objects, the table and the environment, the number of lights, and the material properties, and the algorithm was even exposed to some random noise in the images. And it turns out that if we do this properly, leaning on the knowledge of only a few thousand images, when the algorithm is uploaded to a real robot arm, it becomes capable of grasping the correct prescribed objects. In this case, the objective was Spam detection. Very amusing. I think the very interesting part is that it is not even using photorealistic rendering and light simulations. These programs are able to create high-quality images that resemble the real world around us, and it is mostly clear that those would be useful to train such an algorithm. However, this only uses extremely crude data, and the knowledge of the AI still generalizes to the real world. How about that? Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.46, "end": 10.46, "text": " In this series, we talk a lot about different AI algorithms that solve a variety of super-difficult"}, {"start": 10.46, "end": 11.46, "text": " tasks."}, {"start": 11.46, "end": 15.94, "text": " These are typically tested within a software environment in the form of a simulation"}, {"start": 15.94, "end": 16.94, "text": " program."}, {"start": 16.94, "end": 21.38, "text": " However, this often leaves the question open whether these algorithms would really work"}, {"start": 21.38, "end": 23.14, "text": " in real-world environments."}, {"start": 23.14, "end": 24.46, "text": " So what about that?"}, {"start": 24.46, "end": 29.98, "text": " This work from OpenAI goes by the name Domain Randomization and is about training an"}, {"start": 29.98, "end": 35.62, "text": " AI on relatively crude computer simulations in a way that can be transferred to the real"}, {"start": 35.62, "end": 36.620000000000005, "text": " world."}, {"start": 36.620000000000005, "end": 41.18, "text": " The problem used to demonstrate this was localizing and grasping objects."}, {"start": 41.18, "end": 46.66, "text": " Note that this algorithm has never seen any real images and was trained using simulated"}, {"start": 46.66, "end": 47.66, "text": " data."}, {"start": 47.66, "end": 49.94, "text": " It only played a computer game, if you will."}, {"start": 49.94, "end": 54.900000000000006, "text": " Now, the question we immediately think about is what the term Domain Randomization"}, {"start": 54.900000000000006, "end": 59.06, "text": " has to do with transferring simulation knowledge into reality."}, {"start": 59.06, "end": 64.18, "text": " The key observation is that using simulated training data is okay, but we have to make"}, {"start": 64.18, "end": 70.02000000000001, "text": " sure that the AI is exposed to a diverse enough set of circumstances to obtain knowledge"}, {"start": 70.02000000000001, "end": 74.22, "text": " that generalizes properly, hence the term Domain Randomization."}, {"start": 74.22, "end": 78.22, "text": " In these experiments, the following parameters were heavily randomized."}, {"start": 78.22, "end": 83.38, "text": " Number of shapes and distractor objects on the table, positions and textures on the objects,"}, {"start": 83.38, "end": 88.74000000000001, "text": " table and the environment, number of lights, material properties, and the algorithm was"}, {"start": 88.74, "end": 92.33999999999999, "text": " even exposed to some random noise as well in the images."}, {"start": 92.33999999999999, "end": 97.33999999999999, "text": " And it turns out that if we do this properly, leaning on the knowledge of only a few thousand"}, {"start": 97.33999999999999, "end": 102.89999999999999, "text": " images, when the algorithm is uploaded to a real robot arm, it becomes capable of grasping"}, {"start": 102.89999999999999, "end": 104.97999999999999, "text": " the correct prescribed objects."}, {"start": 104.97999999999999, "end": 108.53999999999999, "text": " In this case, the objective was spam detection."}, {"start": 108.53999999999999, "end": 109.53999999999999, "text": " Very amusing."}, {"start": 109.53999999999999, "end": 114.61999999999999, "text": " I think the very interesting part is that it is not even using photorealistic rendering"}, {"start": 114.61999999999999, "end": 116.14, "text": " and light simulations."}, {"start": 116.14, "end": 121.1, "text": 
" These programs are able to create high quality images that resemble the real world around"}, {"start": 121.1, "end": 126.02, "text": " us, and it is mostly clear that those would be useful to train such an algorithm."}, {"start": 126.02, "end": 132.1, "text": " However, this only uses extremely crude data and the knowledge of the AI still generalizes"}, {"start": 132.1, "end": 133.42000000000002, "text": " to the real world."}, {"start": 133.42000000000002, "end": 134.74, "text": " How about that?"}, {"start": 134.74, "end": 154.54000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9xlSy9F5WtE
New DeepMind AI Beats AlphaGo 100-0 | Two Minute Papers #201
The AlphaGo Zero paper "Mastering the Game of Go without Human Knowledge" is available here: https://deepmind.com/blog/alphago-zero-learning-scratch/ https://deepmind.com/documents/119/agz_unformatted_nature.pdf Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Photo credits: Watson - AP Photo/Jeopardy Productions, Inc. Fan Hui match photo - Google DeepMind - https://www.youtube.com/watch?v=SUbqy... Go board image credits (all CC BY 2.0): Renato Ganoza - https://flic.kr/p/7nX4kK Jaro Larnos (changes were applied, mostly recoloring) - https://flic.kr/p/dDeQU9 Luis de Bethencourt - https://flic.kr/p/4c5RaR Go ratings: https://www.goratings.org/en/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/skJBM1 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Hold on to your papers, because this work on AlphaGo is absolute insanity. In the game of Go, the players put stones on the board, where the objective is to surround more territory than the opponent. This is a beautiful game that is particularly interesting for AI research because the space of possible moves is vastly larger than in chess, which means that using any sort of exhaustive search is out of the question, and we have to resort to smart algorithms that are able to identify a small number of strong moves within this enormously large search space. The first incarnation of DeepMind's Go AI, AlphaGo, uses a combination of a policy network that is responsible for predicting the moves and a value network that predicts the winner of the game after it plays it to the end against itself. These are both deep neural networks, and they are then combined with a technique called Monte Carlo Tree Search to be able to narrow down the search in this large search space. This algorithm started out with a bootstrapping process where it was shown thousands of games that were used to learn the basics of Go. Based on this, it is clear that such an algorithm can learn to be as good as formidable human players. But the big question was, how could it possibly become even better than the professionals that it has observed? How could the disciple become better than its master? The solution is that after it has learned what it can from these games, it plays against itself many, many times to improve its skills. This second phase is the main part of the training and takes the most time. Let's call this base algorithm AlphaGo Fan, which was used to play against Fan Hui, a 2-dan European Go champion who was defeated 5-0. This was a historic moment and the first time an AI beat a professional Go player without a handicap. Fan Hui described his experience as playing against a very strong and stable player, and he also mentioned that the algorithm felt very human-like. Some voiced their doubts within the Go community and noted that the algorithm would never be able to beat Lee Sedol, a 9-dan world champion and winner of 18 international titles. To give you an intuition of the difference: based on their Elo points, Lee Sedol is expected to beat Fan Hui 97 times out of 100 games. So a few months later, DeepMind organized a huge media event where they would challenge him to play against AlphaGo. This was a slightly modified version of the base algorithm that used a deeper neural network with more layers and was trained using more resources than the previous version. There was also an algorithmic change to the policy networks; the details on this are available in the paper in the description. It is a great read. Make sure to have a look. Let's call this algorithm AlphaGo Lee. This event was watched all around the world and can perhaps be compared to Kasparov's public chess games against Deep Blue. I have the fondest memories of waking up super early in the morning, jumping out of bed in excitement to watch all these Go matches. And in a long and nail-biting series, Lee Sedol was defeated 4-1 by the AI. With significantly less media attention, the next phase came, bearing the name AlphaGo Master, which used around 10 times fewer tensor processing units than AlphaGo Lee and became an even stronger player. This algorithm played against human professionals online in January 2017 and won all 60 matches it played. This is insanity. 
But if you think that's it, well, hold on to your papers now. In their newest work, AlphaGo has reached its next form, AlphaGo Zero. This variant does not have access to any human-played games in the first phase and learns completely through self-play. It starts out from absolutely nothing, with just the knowledge of the rules of the game. It was trained for 40 days, and by day 3, it reached the level of AlphaGo Lee; this is above world champion level. Around day 29, it hits the level of AlphaGo Master, which is practically unbeatable for all human beings. And get this: at 40 days, this version surpasses all previous AlphaGo versions and defeats the previously published, world-beating version 100-0. This has kept me up for several nights now, and I am completely out of words. In this version, the two neural networks are fused into one, which can be trained more efficiently. It is beautiful to see these curves as they show this neural network starting from a random initialization. It knows the rules, but beyond that it is completely clueless about the game itself, and it rapidly becomes practically unbeatable. And I left the best part for last. It uses only one single machine. I think it is fair to say that this is history unfolding before our eyes. What a time to be alive. Congratulations to the DeepMind team for this remarkable achievement. As for me, I love talking about research to a wider audience, and it is a true privilege to be able to tell these stories to you. Thank you very much for your generous support on Patreon and for making me able to spend more and more time with what I love most. Absolutely amazing. And now, I know it's a bit redundant, but from muscle memory, I'll sign out the usual way.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejolnai-Fehir."}, {"start": 4.76, "end": 9.92, "text": " Hold on to your papers because this work on AlphaGo is absolute insanity."}, {"start": 9.92, "end": 14.8, "text": " In the game of Go, the players put stones on the table where the objective is to surround"}, {"start": 14.8, "end": 16.96, "text": " more territory than the opponent."}, {"start": 16.96, "end": 22.28, "text": " This is a beautiful game that is particularly interesting for AI research because the space"}, {"start": 22.28, "end": 28.16, "text": " of possible moves is vastly larger than in chess, which means that using any sort of exhaustive"}, {"start": 28.16, "end": 34.4, "text": " search is out of question, and we have to resort to smart algorithms that are able to identify"}, {"start": 34.4, "end": 39.480000000000004, "text": " a small number of strong moves within this dependently large search space."}, {"start": 39.480000000000004, "end": 45.8, "text": " The first incarnation of DeepMind's Go AI AlphaGo uses a combination of a policy network"}, {"start": 45.8, "end": 50.64, "text": " that is responsible for predicting the moves and the value network that predicts the winner"}, {"start": 50.64, "end": 54.32, "text": " of the game after it plays it to the end against itself."}, {"start": 54.32, "end": 58.64, "text": " These are both deep neural networks and they are then combined with a technique called"}, {"start": 58.64, "end": 64.2, "text": " Monte Carlo 3 Search to be able to narrow down the search in this large search space."}, {"start": 64.2, "end": 69.36, "text": " This algorithm started out with a bootstrapping process where it was shown thousands of games"}, {"start": 69.36, "end": 72.36, "text": " that were used to learn the basics of Go."}, {"start": 72.36, "end": 77.8, "text": " Based on this, it is clear that such an algorithm can learn to be as good as formidable human"}, {"start": 77.8, "end": 78.8, "text": " players."}, {"start": 78.8, "end": 83.76, "text": " But the big question was, how could it possibly become even better than the professionals"}, {"start": 83.76, "end": 85.16000000000001, "text": " that it has observed?"}, {"start": 85.16000000000001, "end": 88.64, "text": " How could the disciple become better than its master?"}, {"start": 88.64, "end": 93.44, "text": " The solution is that after it has learned what it can from these games, it plays against"}, {"start": 93.44, "end": 96.64, "text": " itself many many times to improve its skills."}, {"start": 96.64, "end": 100.76, "text": " This second phase is the main part of the training that takes the most time."}, {"start": 100.76, "end": 105.96000000000001, "text": " Let's call this base algorithm AlphaGoFan, which was used to play a guessfant way, a"}, {"start": 105.96000000000001, "end": 110.52000000000001, "text": " two-dan European Go champion who was defeated 5-0."}, {"start": 110.52, "end": 116.24, "text": " This was a historic moment and the first time an AI beat a professional Go player without"}, {"start": 116.24, "end": 117.56, "text": " a handicap."}, {"start": 117.56, "end": 122.84, "text": " Funway described his experience as playing a guest a very strong and stable player and"}, {"start": 122.84, "end": 126.6, "text": " he also mentioned that the algorithm felt very human like."}, {"start": 126.6, "end": 131.44, "text": " Some voiced their doubts within the Go community and noted that the algorithm would never be"}, {"start": 131.44, 
"end": 138.35999999999999, "text": " able to beat Lisa Dahl and 9-dan World Champion and winner of 18 international titles."}, {"start": 138.36, "end": 140.72000000000003, "text": " Also give you an intuition of the difference."}, {"start": 140.72000000000003, "end": 148.16000000000003, "text": " Based on their illow points, Lisa Dahl is expected to beat Funway 97 times out of 100 games."}, {"start": 148.16000000000003, "end": 153.0, "text": " So a few months later, DeepMind organized a huge media event where they would challenge"}, {"start": 153.0, "end": 155.08, "text": " him to play against AlphaGo."}, {"start": 155.08, "end": 160.08, "text": " This was a slightly modified version of the base algorithm that used a deeper neural network"}, {"start": 160.08, "end": 164.92000000000002, "text": " with more layers and was trained using more resources than the previous version."}, {"start": 164.92, "end": 168.88, "text": " There was also an algorithmic change to the policy networks, the details on this are"}, {"start": 168.88, "end": 171.2, "text": " available in the paper in the description."}, {"start": 171.2, "end": 172.88, "text": " It is a great read."}, {"start": 172.88, "end": 174.04, "text": " Make sure to have a look."}, {"start": 174.04, "end": 176.76, "text": " Let's call this algorithm AlphaGo Lee."}, {"start": 176.76, "end": 181.27999999999997, "text": " This event was watched all around the world and can perhaps be compared to the cuspere"}, {"start": 181.27999999999997, "end": 184.04, "text": " of public chess games against DeepBlue."}, {"start": 184.04, "end": 188.92, "text": " I have the fondest memories of waking up super early in the morning jumping out of the"}, {"start": 188.92, "end": 192.07999999999998, "text": " bed in excitement to watch all these Go matches."}, {"start": 192.08, "end": 198.24, "text": " And in a long and nail biting series, Lisa Dahl was defeated 4-1 by the AI."}, {"start": 198.24, "end": 203.28, "text": " With significantly less media attention, the next phase came, bearing the name AlphaGo"}, {"start": 203.28, "end": 209.28, "text": " Master, which used around 10 times less tensor processing units than the AlphaGo Lee and"}, {"start": 209.28, "end": 211.96, "text": " became an even stronger player."}, {"start": 211.96, "end": 218.92000000000002, "text": " This algorithm played against human professionals online in January 2017 and won all 60 matches"}, {"start": 218.92000000000002, "end": 220.08, "text": " it had played."}, {"start": 220.08, "end": 221.64000000000001, "text": " This is insanity."}, {"start": 221.64, "end": 225.07999999999998, "text": " But if you think that's it, well, hold on to your papers now."}, {"start": 225.07999999999998, "end": 230.11999999999998, "text": " In their newest work, AlphaGo has reached its next form AlphaGo Zero."}, {"start": 230.11999999999998, "end": 235.07999999999998, "text": " This variant does not have access to any human played games in the first phase and learns"}, {"start": 235.07999999999998, "end": 237.32, "text": " completely through self play."}, {"start": 237.32, "end": 242.2, "text": " It starts out from absolutely nothing with just the knowledge of the rules of the game."}, {"start": 242.2, "end": 249.72, "text": " It was trained for 40 days and by day 3, it reached the level of AlphaGo Lee this is above"}, {"start": 249.72, "end": 251.48, "text": " world champion level."}, {"start": 251.48, "end": 257.56, "text": " Around day 29, it hits the level of AlphaGo Master, which is practically 
unbeatable to all"}, {"start": 257.56, "end": 258.96, "text": " human beings."}, {"start": 258.96, "end": 265.24, "text": " And get this, at 40 days, this version surpasses all previous AlphaGo versions and defeats"}, {"start": 265.24, "end": 269.76, "text": " the previously published WorldBedar version 100-0."}, {"start": 269.76, "end": 274.44, "text": " This has kept me up for several nights now and I am completely out of words."}, {"start": 274.44, "end": 279.0, "text": " In this version, the two neural networks are fused into one which can be trained more"}, {"start": 279.0, "end": 280.0, "text": " efficiently."}, {"start": 280.0, "end": 284.76, "text": " It is beautiful to see these curves as they show this neural network starting from a random"}, {"start": 284.76, "end": 286.08, "text": " initialization."}, {"start": 286.08, "end": 290.92, "text": " It knows the rules but beyond that it is completely clueless about the game itself and"}, {"start": 290.92, "end": 294.12, "text": " this rapidly becomes practically unbeatable."}, {"start": 294.12, "end": 296.36, "text": " And I left the best part for last."}, {"start": 296.36, "end": 299.48, "text": " It uses only one single machine."}, {"start": 299.48, "end": 304.04, "text": " I think it is fair to say that this is history unfolding before our eyes."}, {"start": 304.04, "end": 306.08, "text": " What a time to be alive."}, {"start": 306.08, "end": 309.32, "text": " Congratulations to the DeepMine team for this remarkable achievement."}, {"start": 309.32, "end": 314.76, "text": " And for me, I love talking about research to a wider audience and it is a true privilege"}, {"start": 314.76, "end": 316.84, "text": " to be able to tell these stories to you."}, {"start": 316.84, "end": 321.15999999999997, "text": " Thank you very much for your generous support on Patreon and making me able to spend more"}, {"start": 321.15999999999997, "end": 323.56, "text": " and more time with what I love most."}, {"start": 323.56, "end": 325.12, "text": " Absolutely amazing."}, {"start": 325.12, "end": 329.64, "text": " And now I know it's a bit redundant but from muscle memory, I'll sign out to usual"}, {"start": 329.64, "end": 343.08, "text": " way."}]
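The "97 times out of 100" claim in the transcript above comes from the standard Elo expected-score formula, E_A = 1 / (1 + 10^((R_B - R_A)/400)). The sketch below simply evaluates it; the exact rating numbers are illustrative assumptions (roughly a 600-point gap), not figures taken from the paper.

```python
# Expected score of player A against player B under the Elo model:
#   E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
# The ratings below are illustrative assumptions only; a gap of about
# 600 Elo points corresponds to roughly a 97% expected score.

def elo_expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

lee_sedol_rating = 3520   # hypothetical rating
fan_hui_rating = 2920     # hypothetical rating

print(round(elo_expected_score(lee_sedol_rating, fan_hui_rating), 3))  # ~0.969
```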
Two Minute Papers
https://www.youtube.com/watch?v=mECv52eSjBo
Real-Time Global Illumination With Radiance Probes | Two Minute Papers #200
The paper "Real-time Global Illumination by Precomputed Local Reconstruction from Sparse Radiance Probes" is available here: https://arisilvennoinen.github.io/Publications/Real-time_Global_Illumination_by_Precomputed_Local_Reconstruction_from_Sparse_Radiance_Probes.pdf We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is our 200th episode, so I know you're expecting something great. See how you like this one. One of the most sought-after effects in light transport simulations is capturing indirect illumination. This is a beautiful effect where the colors of multiple diffuse, matte surfaces bleed onto each other. And of course, computing such an effect is as costly as it is beautiful, because it requires following the path of millions and millions of light rays. This usually means several hours of waiting time. There have been countless research papers written on how to do this in real time, but the limitations were often much too crippling for practical use. But this time around, you will soon see that these results are just outstanding. And we will have a word on limitations at the end of this video. The key contribution of this work is that instead of computing the light transport between all possible point pairs in the scene, it uses radiance probes that measure the nearby illumination and tries to reconstruct the missing information from this sparse set of radiance probes. After that, we place a bunch of receiver points around the scene at places where we would like to know how the indirect illumination looks. There are several things to be taken care of in the implementation of this idea. For instance, in previous works, the hierarchy of the sender and receiver points was typically fixed. In this new work, it is shown that a much sparser set of carefully placed radiance probes is sufficient to create high-quality reconstructions. This seemingly small difference also gives rise to a lot of ambiguous cases that the researchers needed to work out how to deal with. For instance, possible occlusions between the probes and receiver points need special care. The entire algorithm is explained in a remarkably intuitive way in the paper; make sure to have a look at that. And, given that we can create images by performing much less computation with this technique, we can perform real-time light simulations. As you can see, 3.9 milliseconds is a typical value for computing an entire image, which means that this can be done at over 250 frames per second. That's not only real-time, that's several times real-time, if you will. Outstanding. And of course, now that we know that this technique is fast, the next question is how accurate it is. As expected, the outputs are always compared to the reference footage, so we can see how accurate the proposed technique is. Clearly, there are differences. However, probably many of us would fail to notice that we are not looking at the reference footage, especially if we don't have access to it, which is the case in most applications. And note that normally, we would have to wait for hours for results like this. Isn't this incredible? There are also tons more comparisons in the paper. For instance, it is also shown how the density of radiance probes relates to the output quality, and where the possible sweet spots are for industry practitioners. It is also tested against many competing solutions. Not only the results, but the number and quality of comparisons is also top tier in this paper. However, like with all research works, no new idea comes without limitations. This method works extremely well for static scenes where not a lot of objects move around. 
Some movement is still fine as it is shown in the video here, but drastic changes to the structure of the scene, like a large opening door that remains unaccounted for by the probes will lead to dips in the quality of the reconstruction. I think this is an excellent direction for future research works. If you enjoyed this episode, make sure to subscribe and click the bell icon. We have some more amazing papers coming up. You don't want to miss that. Thanks for watching and for your generous support. I'll see you next time.
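To make the probe-based reconstruction idea above a bit more tangible, here is a small, heavily simplified Python sketch. It is not the reconstruction operator from the paper: probe placement, the precomputed local reconstruction and the visibility handling are far more sophisticated there, and all names and numbers below are made up purely for illustration. It only shows the general flavor of gathering indirect light at receiver points from a sparse set of probes, plus the back-of-the-envelope arithmetic behind the frame rate claim.

```python
import numpy as np

def reconstruct_indirect(receivers, probes, probe_radiance, occluded):
    """receivers: (R, 3) points, probes: (P, 3) points, probe_radiance: (P, 3) RGB values,
    occluded[r, p] is True when probe p is not visible from receiver r."""
    result = np.zeros((len(receivers), 3))
    for r, x in enumerate(receivers):
        d = np.linalg.norm(probes - x, axis=1)        # distance from this receiver to every probe
        w = 1.0 / (d + 1e-4)                          # closer probes contribute more
        w[occluded[r]] = 0.0                          # ignore probes hidden behind geometry
        if w.sum() == 0.0:                            # no visible probe: leave this point black
            continue
        result[r] = (w[:, None] * probe_radiance).sum(axis=0) / w.sum()
    return result

# Tiny example: one receiver halfway between a reddish and a bluish probe.
probes = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
probe_radiance = np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]])
receivers = np.array([[1.0, 0.0, 0.0]])
occluded = np.zeros((1, 2), dtype=bool)
print(reconstruct_indirect(receivers, probes, probe_radiance, occluded))

# The paper reports about 3.9 ms per frame, i.e. roughly 1000 / 3.9 ≈ 256 frames per second.
```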
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.32, "end": 9.040000000000001, "text": " This is our 200th episode, so I know you're expecting something great."}, {"start": 9.040000000000001, "end": 10.48, "text": " See how you like this one."}, {"start": 10.48, "end": 16.48, "text": " One of the most sought-after effects in light transport simulations is capturing indirect illumination."}, {"start": 16.48, "end": 20.400000000000002, "text": " This is a beautiful effect where the color of multiple diffuse,"}, {"start": 20.400000000000002, "end": 22.96, "text": " matte surfaces bleed onto each other."}, {"start": 22.96, "end": 27.36, "text": " And of course, computing such an effect is as costly as it is beautiful,"}, {"start": 27.36, "end": 31.92, "text": " because it requires following the path of millions and millions of light rays."}, {"start": 31.92, "end": 35.12, "text": " This usually means several hours of waiting time."}, {"start": 35.12, "end": 39.28, "text": " There have been countless research papers written on how to do this in real time,"}, {"start": 39.28, "end": 43.12, "text": " but the limitations were often much too crippling for practical use."}, {"start": 43.12, "end": 47.92, "text": " But this time around, you will see soon that these results are just outstanding."}, {"start": 47.92, "end": 51.36, "text": " And we will have a word on limitations at the end of this video."}, {"start": 51.36, "end": 55.6, "text": " The key contribution of this work is that instead of computing the light transport"}, {"start": 55.6, "end": 60.800000000000004, "text": " between all possible point pairs in the scene, it uses radiance probes"}, {"start": 60.800000000000004, "end": 65.76, "text": " that measure the nearby illumination and tries to reconstruct the missing information"}, {"start": 65.76, "end": 68.4, "text": " from this sparse set of radiance probes."}, {"start": 68.4, "end": 72.56, "text": " After that, we place a bunch of receiver points around the scene to places"}, {"start": 72.56, "end": 75.84, "text": " where we would like to know how the indirect illumination looks."}, {"start": 75.84, "end": 80.24000000000001, "text": " There are several things to be taken care of in the implementation of this idea."}, {"start": 80.24000000000001, "end": 85.04, "text": " For instance, in previous works, the hierarchy of the sender and receiver points"}, {"start": 85.04, "end": 86.56, "text": " was typically fixed."}, {"start": 86.56, "end": 92.16000000000001, "text": " In this new work, it is shown that the much sparser set of carefully placed radiance probes"}, {"start": 92.16000000000001, "end": 95.36000000000001, "text": " is sufficient to create high-quality reconstructions."}, {"start": 95.36000000000001, "end": 100.16000000000001, "text": " This seemingly small difference also gives rise to a lot of ambiguous cases"}, {"start": 100.16000000000001, "end": 103.36000000000001, "text": " that the researchers needed to work out how to deal with."}, {"start": 103.36000000000001, "end": 108.24000000000001, "text": " For instance, possible occlusions between the probes and receiver points need special care."}, {"start": 108.80000000000001, "end": 113.2, "text": " The entire algorithm is explained in a remarkably intuitive way in the paper,"}, {"start": 113.2, "end": 114.8, "text": " make sure to have a look at that."}, {"start": 114.8, "end": 119.04, "text": " And, given that we can create images by 
performing much less computation"}, {"start": 119.04, "end": 123.2, "text": " with this technique, we can perform real-time light simulations."}, {"start": 123.2, "end": 129.12, "text": " As you can see, 3.9 milliseconds is a typical value for computing an entire image,"}, {"start": 129.12, "end": 134.16, "text": " which means that this can be done with over 250 frames per second."}, {"start": 134.16, "end": 138.96, "text": " That's not only real-time, that's several times real-time, if you will."}, {"start": 138.96, "end": 140.08, "text": " Outstanding."}, {"start": 140.08, "end": 144.48000000000002, "text": " And of course, now that we know that this technique is fast, the next question is"}, {"start": 144.48000000000002, "end": 146.08, "text": " how accurate is it."}, {"start": 146.08, "end": 150.08, "text": " As expected, the outputs are always compared to the reference footage,"}, {"start": 150.08, "end": 153.20000000000002, "text": " so we can see how accurate the proposed technique is."}, {"start": 153.20000000000002, "end": 155.04000000000002, "text": " Clearly, there are differences."}, {"start": 155.04000000000002, "end": 159.68, "text": " However, probably many of us would fail to notice that we are not looking at the reference"}, {"start": 159.68, "end": 164.8, "text": " footage, especially if we don't have access to it, which is the case in most applications."}, {"start": 164.8, "end": 169.44, "text": " And note that normally, we would have to wait for hours for results like this."}, {"start": 169.44, "end": 171.12, "text": " Isn't this incredible?"}, {"start": 171.12, "end": 174.32, "text": " There are also tons of more comparisons in the paper."}, {"start": 174.32, "end": 178.24, "text": " For instance, it is also shown how the density of radiance probes"}, {"start": 178.24, "end": 184.16, "text": " relates to the output quality, and where the possible sweet spots are for industry practitioners."}, {"start": 184.16, "end": 187.2, "text": " It is also tested against many competing solutions."}, {"start": 187.2, "end": 193.92, "text": " Not only the results, but the number and quality of comparisons is also top tier in this paper."}, {"start": 193.92, "end": 198.96, "text": " However, like with all research works, no new idea comes without limitations."}, {"start": 198.96, "end": 204.24, "text": " This method works extremely well for static scenes where not a lot of objects move around."}, {"start": 204.24, "end": 207.52, "text": " Some movement is still fine as it is shown in the video here,"}, {"start": 207.52, "end": 210.08, "text": " but drastic changes to the structure of the scene,"}, {"start": 210.08, "end": 214.48000000000002, "text": " like a large opening door that remains unaccounted for by the probes"}, {"start": 214.48000000000002, "end": 217.44, "text": " will lead to dips in the quality of the reconstruction."}, {"start": 217.44, "end": 221.04000000000002, "text": " I think this is an excellent direction for future research works."}, {"start": 221.04000000000002, "end": 225.04000000000002, "text": " If you enjoyed this episode, make sure to subscribe and click the bell icon."}, {"start": 225.04000000000002, "end": 227.52, "text": " We have some more amazing papers coming up."}, {"start": 227.52, "end": 228.88000000000002, "text": " You don't want to miss that."}, {"start": 228.88000000000002, "end": 231.20000000000002, "text": " Thanks for watching and for your generous support."}, {"start": 231.2, "end": 257.91999999999996, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kfJMUeQO0S0
Learning to Model Other Minds (OpenAI) | Two Minute Papers #199
The paper "Learning with Opponent-Learning Awareness" is available here: https://arxiv.org/abs/1709.04326 Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers Showcased episodes: Real-Time Oil Painting on Mobile - https://www.youtube.com/watch?v=1SHW1-qKKpY Real-Time Modeling and Animation of Climbing Plants - https://www.youtube.com/watch?v=aAsejHZC5EE We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2629752/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work doesn't have a ton of viewable footage, but I think it is an absolutely amazing piece of craftsmanship. So in the first half of this video, we'll roll some footage from earlier episodes, and in the second half, you'll see the new stuff. In this series, we often talk about reinforcement learning, which is a learning technique where an agent chooses an optimal series of actions in an environment to maximize a score. Learning computer games is a good example of a clearly defined score that is to be maximized: as long as we can say that the higher the score, the better, learning can take place. The concept will work for helicopter control, choosing the best spot for Wi-Fi connectivity, or a large variety of different tasks. However, what about environments where multiple agents or players are present? Not all games are single-player focused, and not all helicopters have to fly alone. So what about that? To deal with cases like this, scientists at OpenAI and the University of Oxford came up with a work by the name Learning with Opponent-Learning Awareness, L-O-L-A in short, or LOLA. I have to say that the naming game at OpenAI has been quite strong lately. This is about multiplayer reinforcement learning, if you will. This new agent does not only care about maximizing its own score, but also inserts a new term into the equation, which is about anticipating the actions of other players in the environment. It is not only possible to do this, but the authors also show that it can be done in an effective way, and the best part is that it also gives rise to classical strategies that game theory practitioners will immediately recognize. For instance, it can learn tit-for-tat, which is a strategy that mirrors the other player's actions. This means that if the other player is cooperative, it will remain cooperative. And if it gets screwed over, it will also try to screw others over. You'll see in a moment why this is a big deal. The Prisoner's Dilemma is a game where two criminals are caught and are independently interrogated, and they have to choose whether they snitch on the other one or not. If one of them snitches, there will be hell to pay for the other one. If they both defect, they both serve a fair amount of time in prison. The score to be minimized is therefore the time spent in prison. And this strategy is something that we call a Nash equilibrium. In other words, this is the best set of actions if we consider the options of the other actor as well, and expect that they do the same for us. The optimal solution of this game is when both criminals remain silent. And now, the first cool result is that if we run the Prisoner's Dilemma with two of these new LOLA agents, they quickly find the Nash equilibrium. This is great. But wait, we have talked about this tit-for-tat thing. So what's the big deal with that? There is an iterated version of the Prisoner's Dilemma game where the snitching-or-cooperating game is replayed many, many times. It is an ideal benchmark because an advanced agent would know that we cooperated the last time, so it is likely that we can partner up this time around too. And now comes the even cooler thing. This is where the tit-for-tat strategy emerges. These LOLA agents know that if they cooperated the previous time, they will immediately give each other another chance and again get away with the least amount of prison time. 
As you can see here, the results vastly outperform those of naive agents, and from the scores, it seems that previous techniques enter a snitching revenge war against each other, and both will serve plenty of time in prison. The method is also benchmarked on other games against naive and cooperative agents, vastly outperforming them. This is a fantastic paper. Make sure to check it out in the video description for more details. I found it to be very readable, so do not despair if your math kung fu is not that strong. Just dive into it. Videos like this tend to get fewer views because they have less visual fireworks than most other works we are discussing in the series. Fortunately, we are super lucky because we have your support on Patreon and can tell these important stories without worrying about going viral. And if you have enjoyed this episode and you feel that eight of these videos a month is worth a dollar, please consider supporting us on Patreon. One buck is almost nothing, but it keeps the papers coming. Details are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
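For readers who would like to poke at the benchmark themselves, here is a tiny Python sketch of the iterated Prisoner's Dilemma and the tit-for-tat strategy that the LOLA agents rediscover. Note that this is only the game plus a hand-written strategy, not the LOLA learning rule, and the payoff numbers are the textbook ones rather than the exact values used in the paper.

```python
YEARS = {  # (my move, opponent's move) -> my years in prison; C = stay silent, D = snitch
    ("C", "C"): 1, ("C", "D"): 3, ("D", "C"): 0, ("D", "D"): 2,
}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]   # mirror the opponent's last move

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    hist_a, hist_b, years_a, years_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b)          # each player only sees the opponent's past moves
        b = strategy_b(hist_a)
        years_a += YEARS[(a, b)]
        years_b += YEARS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))         # mutual cooperation: little prison time for both
print(play(always_defect, always_defect))     # the snitching revenge war: much more for both
```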
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 9.68, "text": " This work doesn't have a ton of viewable footage, but I think it is an absolutely amazing"}, {"start": 9.68, "end": 11.64, "text": " piece of craftsmanship."}, {"start": 11.64, "end": 15.84, "text": " So in the first half of this video, we'll roll some footage from earlier episodes, and"}, {"start": 15.84, "end": 18.400000000000002, "text": " in the second half, you'll see the new stuff."}, {"start": 18.400000000000002, "end": 22.68, "text": " In this series, we often talk about reinforcement learning, which is a learning technique where"}, {"start": 22.68, "end": 28.400000000000002, "text": " an agent chooses an optimal series of actions in an environment to maximize the score."}, {"start": 28.4, "end": 33.48, "text": " Learning computer games is a good example of a clearly defined score that is to be maximized."}, {"start": 33.48, "end": 37.36, "text": " As long as we can say that the higher the score, the better the learning."}, {"start": 37.36, "end": 42.839999999999996, "text": " The concept will work for helicopter control choosing the best spot for Wi-Fi connectivity"}, {"start": 42.839999999999996, "end": 45.36, "text": " or a large variety of different tasks."}, {"start": 45.36, "end": 50.64, "text": " However, what about environments where multiple agents or players are present?"}, {"start": 50.64, "end": 55.36, "text": " Not all games are single-player focused, and not all helicopters have to fly alone."}, {"start": 55.36, "end": 60.8, "text": " So what about that? To deal with cases like this, scientists at OpenAI and the University"}, {"start": 60.8, "end": 67.32, "text": " of Oxford came up with a work by the name Learning with Opponent Learning Awareness, L-O-L-A"}, {"start": 67.32, "end": 69.44, "text": " in short, or Lola."}, {"start": 69.44, "end": 73.6, "text": " I have to say that the naming game at OpenAI has been quite strong lately."}, {"start": 73.6, "end": 76.96000000000001, "text": " This is about multiplayer reinforcement learning, if you will."}, {"start": 76.96000000000001, "end": 82.92, "text": " This new agent does not only care about maximizing its own score, but also inserts a new term"}, {"start": 82.92, "end": 88.4, "text": " into the equation, which is about anticipating the actions of other players in the environment."}, {"start": 88.4, "end": 93.2, "text": " It is not only possible to do this, but they also show that it can be done in an effective"}, {"start": 93.2, "end": 99.2, "text": " way, and the best part is that it also gives rise to classical strategies that game theory"}, {"start": 99.2, "end": 102.0, "text": " practitioners will immediately recognize."}, {"start": 102.0, "end": 106.96000000000001, "text": " For instance, it can learn Tit4Tet, which is a strategy that mirrors the other player's"}, {"start": 106.96000000000001, "end": 107.96000000000001, "text": " actions."}, {"start": 107.96000000000001, "end": 112.12, "text": " This means that if the other player is cooperative, it will remain cooperative."}, {"start": 112.12, "end": 115.96000000000001, "text": " And if it gets screwed over, it will also try to screw others over."}, {"start": 115.96000000000001, "end": 118.60000000000001, "text": " You'll see in a moment why this is a big deal."}, {"start": 118.60000000000001, "end": 123.12, "text": " The Prisoner's Delama is a game where two criminals are caught and are independently"}, 
{"start": 123.12, "end": 127.68, "text": " interrogated and have to choose whether they snitch on the other one or not."}, {"start": 127.68, "end": 131.24, "text": " If anyone snitches out, there will be hell to pay for the other one."}, {"start": 131.24, "end": 135.32, "text": " If they both affect, they both serve a fair amount of time in prison."}, {"start": 135.32, "end": 138.96, "text": " The score to be minimized is therefore this time spent in prison."}, {"start": 138.96, "end": 142.6, "text": " And this strategy is something that we call the Nash equilibrium."}, {"start": 142.6, "end": 147.20000000000002, "text": " In other words, this is the best set of actions if we consider the options of the other actor"}, {"start": 147.20000000000002, "end": 150.56, "text": " as well, and expect that they do the same for us."}, {"start": 150.56, "end": 154.76000000000002, "text": " The optimal solution of this game is when both criminals remain silent."}, {"start": 154.76000000000002, "end": 159.48000000000002, "text": " And now, the first cool result is that if we run the Prisoner's Delama with two of"}, {"start": 159.48000000000002, "end": 163.64000000000001, "text": " these new Lola agents, they quickly find the Nash equilibrium."}, {"start": 163.64000000000001, "end": 164.64000000000001, "text": " This is great."}, {"start": 164.64000000000001, "end": 167.32, "text": " But wait, we have talked about this Tit4Tet thing."}, {"start": 167.32, "end": 168.88, "text": " So what's the big deal with that?"}, {"start": 168.88, "end": 173.88, "text": " There is an iterated version of the Prisoner's Delama game where the snitching or cooperating"}, {"start": 173.88, "end": 176.79999999999998, "text": " game is replayed many, many times."}, {"start": 176.79999999999998, "end": 181.79999999999998, "text": " It is an ideal benchmark because an advanced agent would know that we cooperated the last"}, {"start": 181.79999999999998, "end": 186.0, "text": " time, so it is likely that we can park her up this time around too."}, {"start": 186.0, "end": 188.12, "text": " And now comes the even cooler thing."}, {"start": 188.12, "end": 191.04, "text": " This is where the Tit4Tet strategy emerges."}, {"start": 191.04, "end": 195.88, "text": " These Lola agents know that if the previous time they cooperated, they will immediately"}, {"start": 195.88, "end": 201.12, "text": " give each other another chance and again get away with the least amount of prison time."}, {"start": 201.12, "end": 207.04, "text": " As you can see here, the results vastly outperform other naive agents and from the scores, it seems"}, {"start": 207.04, "end": 212.56, "text": " that previous techniques enter a snitching revenge war against each other and both will serve"}, {"start": 212.56, "end": 214.76, "text": " plenty of time in prison."}, {"start": 214.76, "end": 219.96, "text": " Other games are also benchmarked against naive and cooperative agents vastly outperforming"}, {"start": 219.96, "end": 220.96, "text": " them."}, {"start": 220.96, "end": 222.8, "text": " This is a fantastic paper."}, {"start": 222.8, "end": 225.96, "text": " Make sure to check it out in the video description for more details."}, {"start": 225.96, "end": 231.68, "text": " I found it to be very readable, so do not despair if your math kung fu is not that strong."}, {"start": 231.68, "end": 233.20000000000002, "text": " Just dive into it."}, {"start": 233.20000000000002, "end": 237.64000000000001, "text": " Videos like this tend to get less views because they 
have less visual fireworks than most"}, {"start": 237.64000000000001, "end": 240.04000000000002, "text": " other works we are discussing in a series."}, {"start": 240.04000000000002, "end": 245.28, "text": " Fortunately we are super lucky because we have your support on Patreon and can tell these"}, {"start": 245.28, "end": 249.0, "text": " important stories without worrying about going viral."}, {"start": 249.0, "end": 253.32, "text": " And if you have enjoyed this episode and you feel that eight of these videos a month is"}, {"start": 253.32, "end": 257.04, "text": " worth a dollar, please consider supporting us on Patreon."}, {"start": 257.04, "end": 260.52, "text": " One back is almost nothing but it keeps the papers coming."}, {"start": 260.52, "end": 262.44, "text": " Details are available in the video description."}, {"start": 262.44, "end": 282.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9BOdng9MpzU
AI Learns 3D Face Reconstruction | Two Minute Papers #198
The paper "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression" is available here: http://aaronsplace.co.uk/papers/jackson2017recon/ Online demo: http://cvl-demos.cs.nott.ac.uk/vrn/ Source code: https://github.com/AaronJackson/vrn We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1836445/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Now that facial recognition is becoming more and more of a hot topic, let's talk a bit about 3D face reconstruction. This is a problem where we have a 2D input photograph or a video of a person, and the goal is to create a piece of 3D geometry from it. To accomplish this, previous works often required a combination of proper alignment of the face, multiple photographs and dense correspondences, which is a fancy name for additional data that identifies the same regions across these photographs. But this new formulation is the holy grail of all possible versions of this problem because it requires nothing else but one 2D photograph. The weapon of choice for this work was a convolutional neural network, and the dataset the algorithm was trained on couldn't be simpler. It was given a large database of 2D input image and 3D output geometry pairs. This means that the neural network can look at a lot of these pairs and learn how these input photographs are mapped to 3D geometry. And as you can see, the results are absolutely insane, especially given the fact that it works for arbitrary face positions, many different expressions, and even with occlusions. However, this is not your classical convolutional neural network because, as we mentioned, the input is 2D and the output is 3D. So the question immediately arises: what kind of data structure should be used for the output? The authors went for a 3D voxel array, which is essentially a cube in which we build up the face from small, identical Lego pieces. This representation is similar to the terrain in the game Minecraft, only the resolution of these blocks is finer. The process of guessing how these voxel arrays should look based on the input photograph is referred to in the research community as volumetric regression. This is what this work is about. And now comes the best part. An online demo is also available where we can either try some prepared images or we can also upload our own. So while I run my own experiments, don't leave me out of the good stuff and make sure you post your results in the comments section. The source code is also available for you fellow tinkerers out there. The limitations of this technique include the inability to detect expressions that are very far from the ones seen in the training set. And as you can see in the videos, temporal coherence could also use some help. This means that if we have video input, the reconstruction has some tiny differences in each frame. Maybe a recurrent neural network, like some variant of long short-term memory, could address this in the near future. However, those are trickier and more resource intensive to train properly. We are excited to see how these solutions evolve, and of course, Two Minute Papers is going to be here for you to talk about some amazing upcoming works. Thanks for watching and for your generous support and I'll see you next time.
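To make the idea of volumetric regression a bit more concrete, here is a minimal Python sketch using PyTorch in which a small convolutional network maps a 2D photograph to a stack of depth slices that is read as a 3D voxel occupancy grid. The actual volumetric regression network in the paper is a much deeper hourglass-style architecture trained on real image-geometry pairs; the layer sizes, resolutions and names below are invented purely for illustration.

```python
import torch
import torch.nn as nn

class TinyVolumetricRegressor(nn.Module):
    def __init__(self, depth=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # 128x128 -> 64x64
            nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(),
            # one output channel per depth slice of the voxel volume
            nn.Conv2d(64, depth, 3, stride=1, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):              # image: (batch, 3, 128, 128)
        return self.layers(image)          # (batch, depth, 64, 64), read as (B, D, H, W) occupancies

model = TinyVolumetricRegressor()
photo = torch.rand(1, 3, 128, 128)         # a stand-in for the input photograph
voxels = model(photo)                      # occupancy probabilities in [0, 1]
occupied = voxels > 0.5                    # threshold into Minecraft-style blocks
print(occupied.shape)                      # torch.Size([1, 64, 64, 64])
```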
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efehir."}, {"start": 4.5600000000000005, "end": 9.48, "text": " Now that facial recognition is becoming more and more of a hot topic, let's talk a bit"}, {"start": 9.48, "end": 11.56, "text": " about 3D face reconstruction."}, {"start": 11.56, "end": 16.4, "text": " This is a problem where we have a 2D input photograph or a video of a person and the"}, {"start": 16.4, "end": 20.12, "text": " goal is to create a piece of 3D geometry from it."}, {"start": 20.12, "end": 26.28, "text": " To accomplish this, previous works often required a combination of proper alignment of the face,"}, {"start": 26.28, "end": 31.6, "text": " multiple photographs and dense correspondences, which is a fancy name for additional data"}, {"start": 31.6, "end": 35.0, "text": " that identifies the same regions across these photographs."}, {"start": 35.0, "end": 40.08, "text": " But this new formulation is the holy grail of all possible versions of this problem because"}, {"start": 40.08, "end": 44.08, "text": " it requires nothing else but one 2D photograph."}, {"start": 44.08, "end": 49.040000000000006, "text": " The weapon of choice for this work was a convolutional neural network and the dataset the algorithm"}, {"start": 49.040000000000006, "end": 51.24, "text": " was trained on couldn't be simpler."}, {"start": 51.24, "end": 57.24, "text": " It was given a large database of 2D input image and 3D output geometry pairs."}, {"start": 57.24, "end": 61.88, "text": " This means that the neural network can look at a lot of these pairs and learn how these"}, {"start": 61.88, "end": 65.24000000000001, "text": " input photographs are mapped to 3D geometry."}, {"start": 65.24000000000001, "end": 70.24000000000001, "text": " And as you can see, the results are absolutely insane, especially given the fact that it"}, {"start": 70.24000000000001, "end": 76.48, "text": " works for arbitrary face positions and many different expressions and even with occlusions."}, {"start": 76.48, "end": 81.2, "text": " However, this is not your classical convolutional neural network because as we mentioned,"}, {"start": 81.2, "end": 84.92, "text": " the input is 2D and the output is 3D."}, {"start": 84.92, "end": 89.56, "text": " So the question immediately arises, what kind of data structure should be used for the"}, {"start": 89.56, "end": 90.56, "text": " output?"}, {"start": 90.56, "end": 95.8, "text": " The authors went for a 3D voxel array, which is essentially a cube in which we build up"}, {"start": 95.8, "end": 99.24000000000001, "text": " the face from small, identical, Lego pieces."}, {"start": 99.24000000000001, "end": 103.44, "text": " This representation is similar to the terrain in the game Minecraft."}, {"start": 103.44, "end": 106.12, "text": " Only the resolution of these blocks is finer."}, {"start": 106.12, "end": 110.88, "text": " The process of guessing how these voxel arrays should look based on the input photograph"}, {"start": 110.88, "end": 115.0, "text": " is referred to in the research community as volumetric regression."}, {"start": 115.0, "end": 116.83999999999999, "text": " This is what this work is about."}, {"start": 116.83999999999999, "end": 118.88, "text": " And now comes the best part."}, {"start": 118.88, "end": 124.67999999999999, "text": " An online demo is also available where we can either try some prepared images or we can"}, {"start": 124.67999999999999, "end": 126.11999999999999, 
"text": " also upload our own."}, {"start": 126.11999999999999, "end": 130.4, "text": " So while I run my own experiments, don't leave me out of the good stuff and make sure"}, {"start": 130.4, "end": 132.96, "text": " you post your results in the comment section."}, {"start": 132.96, "end": 136.8, "text": " The source code is also available for you fellow tinkerers out there."}, {"start": 136.8, "end": 141.52, "text": " The limitations of this technique includes the inability of detecting expressions that"}, {"start": 141.52, "end": 144.88000000000002, "text": " are very far away from the ones seen in the training set."}, {"start": 144.88000000000002, "end": 149.08, "text": " And as you can see in the videos, temporal coherence could also use some help."}, {"start": 149.08, "end": 153.4, "text": " This means that if we have video input, the reconstruction has some tiny differences"}, {"start": 153.4, "end": 154.4, "text": " in each frame."}, {"start": 154.4, "end": 158.96, "text": " Maybe a recurrent neural network, like some variant of long short term memory, could"}, {"start": 158.96, "end": 160.76000000000002, "text": " address this in the near future."}, {"start": 160.76000000000002, "end": 165.0, "text": " However, those are trickier and more resource intensive to train properly."}, {"start": 165.0, "end": 169.56, "text": " They are excited to see how these solutions evolve and of course, two minute papers is going"}, {"start": 169.56, "end": 173.2, "text": " to be here for you to talk about some amazing upcoming works."}, {"start": 173.2, "end": 194.92, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=T_g6S3f0Z5I
AI Learns Video Frame Interpolation | Two Minute Papers #197
The paper "Video Frame Interpolation via Adaptive Separable Convolution" and its source code is available here: https://arxiv.org/abs/1708.01692 https://github.com/sniklaus/pytorch-sepconv Two Minute Papers subreddit: https://www.reddit.com/r/twominutepapers/comments/76j145/ai_learns_video_frame_interpolation_two_minute/ Recommended for you: 1. Separable Subsurface Scattering (with convolutions) - https://www.youtube.com/watch?v=72_iAlYwl0c 2. https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 3. Rocking Out With Convolutions - https://www.youtube.com/watch?v=JKYQOAZRZu4 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2842576/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With today's graphics technology, we can enjoy many really smooth videos that were created using 60 frames per second. We love it too, and we hope that you noticed that our last 100 or maybe even more episodes have been available in 60 hertz. However, it oftentimes happens that we are given videos that have anything from 20 to 30 frames per second. This means that if we play them on a 60 FPS timeline, half or even more of these frames will not provide any new information. As we try to slow down the videos for some nice slow motion action, this ratio gets even worse, creating an extremely choppy output video. Fortunately, there are techniques that are able to guess what happens in these intermediate frames and give them to us. This is what we call frame interpolation. We have had some previous experiments in this area where we tried to create an amazing slow motion version of a video with some bubbles merging. A simple and standard way of doing frame interpolation is called frame blending, which is a simple averaging of the closest two known frames. The more advanced techniques are optical flow-based, which is a method to determine what motions happened between these two frames and create new images based on that knowledge, leading to higher quality results in most cases. This technique uses a convolutional neural network to accomplish something similar, but in the end, it doesn't give us an image but a set of convolution kernels. This is a transformation that is applied to the previous and the next frame to produce an intermediate image. It is not the image itself, but the recipe of how to produce it, if you will. We've had a ton of fun with convolutions earlier, where we used them to create beautiful subsurface scattering effects for translucent materials in real time, and our more loyal Fellow Scholars remember that at some point, I also pulled out my guitar and showed what it would sound like inside a church using a convolution-based reverberation technique. The links are available in the video description, make sure to check them out. Since we have a neural network over here, it goes without saying that the training takes place on a large number of before-and-after image pairs so that the network is able to produce these convolution kernels. Of course, to validate this algorithm, we also need to have access to a ground truth reference to compare against. We can accomplish this by withholding some information about a few intermediate frames, so we have the true images which the algorithm would have to reproduce without seeing them. Kind of like giving a test to a student when we already know the answers. You can see such a comparison here. And now, let's have a look at these results. As you can see, they are extremely smooth and the technique retains a lot of high frequency details in these images. The videos also seem temporally coherent, which means that they are devoid of the annoying flickering effect where the reconstruction takes place in a way that's a bit different in each subsequent frame. None of that happens here, which is an excellent property of this technique. The Python source code for this technique is available and is free for non-commercial uses. I've put a link in the description. If you have given it a try and have some results of your own, make sure to post them in the comments section or in our subreddit discussion. The link is available in the description. 
Thanks for watching and for your generous support, and I'll see you next time.
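To contrast the two ideas mentioned above, here is a small Python sketch: naive frame blending is a plain average of the two known frames, while the kernel-based idea applies a small kernel to a patch of the previous and the next frame around every output pixel and sums the two contributions. In the paper, these kernels are predicted per pixel by the neural network and are separable, meaning each is an outer product of a vertical and a horizontal 1D kernel to keep memory in check; here they are simply passed in as arrays, so this only illustrates the mechanics, not the learning.

```python
import numpy as np

def frame_blend(prev_frame, next_frame):
    return 0.5 * prev_frame + 0.5 * next_frame        # the simple averaging baseline

def kernel_interpolate(prev_frame, next_frame, k1, k2):
    """prev_frame, next_frame: (H, W) grayscale frames; k1, k2: (H, W, n, n) per-pixel kernels."""
    h, w = prev_frame.shape
    n = k1.shape[-1]
    pad = n // 2
    p = np.pad(prev_frame, pad, mode="edge")
    q = np.pad(next_frame, pad, mode="edge")
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch_p = p[y:y + n, x:x + n]              # neighborhood in the previous frame
            patch_q = q[y:y + n, x:x + n]              # neighborhood in the next frame
            out[y, x] = (k1[y, x] * patch_p).sum() + (k2[y, x] * patch_q).sum()
    return out

# If k1 and k2 both put a weight of 0.5 on the central tap and zero elsewhere,
# kernel_interpolate reduces exactly to frame_blend.
```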
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 10.0, "text": " With today's graphics technology, we can enjoy many really smooth videos that were created"}, {"start": 10.0, "end": 12.08, "text": " using 60 frames per second."}, {"start": 12.08, "end": 17.6, "text": " We love it too, and we hope that you noticed that our last 100 or maybe even more episodes"}, {"start": 17.6, "end": 19.8, "text": " have been available in 60 hertz."}, {"start": 19.8, "end": 25.240000000000002, "text": " However, it often times happens that we are given videos that have anything from 20 to"}, {"start": 25.240000000000002, "end": 26.88, "text": " 30 frames per second."}, {"start": 26.88, "end": 32.12, "text": " This means that if we play them on a 60 FPS timeline, half or even more of these frames"}, {"start": 32.12, "end": 34.4, "text": " will not provide any new information."}, {"start": 34.4, "end": 39.6, "text": " As we try to slow down the videos for some nice slow motion action, this ratio is even"}, {"start": 39.6, "end": 43.0, "text": " worse, creating an extremely choppy output video."}, {"start": 43.0, "end": 47.44, "text": " Fortunately, there are techniques that are able to guess what happens in these intermediate"}, {"start": 47.44, "end": 49.28, "text": " frames and give them to us."}, {"start": 49.28, "end": 51.92, "text": " This is what we call frame interpolation."}, {"start": 51.92, "end": 56.64, "text": " We have had some previous experiments in this area where we tried to create an amazing"}, {"start": 56.64, "end": 60.0, "text": " slow motion version of a video with some bubbles merging."}, {"start": 60.0, "end": 65.56, "text": " A simple and standard way of doing frame interpolation is called frame blending, which is a simple"}, {"start": 65.56, "end": 68.6, "text": " averaging of the closest two known frames."}, {"start": 68.6, "end": 73.24000000000001, "text": " The more advanced techniques are optical flow-based, which is a method to determine what"}, {"start": 73.24000000000001, "end": 78.4, "text": " motions happened between these two frames and create new images based on that knowledge"}, {"start": 78.4, "end": 81.84, "text": " leading to higher quality results in most cases."}, {"start": 81.84, "end": 86.52000000000001, "text": " This technique uses a convolutional neural network to accomplish something similar, but"}, {"start": 86.52000000000001, "end": 91.12, "text": " in the end, it doesn't give us an image but a set of convolution kernels."}, {"start": 91.12, "end": 95.64, "text": " This is a transformation that is applied to the previous and the next frame to produce"}, {"start": 95.64, "end": 97.12, "text": " an intermediate image."}, {"start": 97.12, "end": 101.76, "text": " It is not the image itself, but the recipe of how to produce it, if you will."}, {"start": 101.76, "end": 106.56, "text": " We've had a ton of fun with convolutions earlier where we use them to create beautiful"}, {"start": 106.56, "end": 111.92, "text": " subsurface scattering effects for translucent materials in real time and are more loyal"}, {"start": 111.92, "end": 117.56, "text": " fellow-scalers remembered that at some point I also pulled out my guitar and showed what"}, {"start": 117.56, "end": 122.64, "text": " it would sound like inside a church using a convolution-based reverberation technique."}, {"start": 122.64, "end": 126.52000000000001, "text": " The links are available 
in the video description, make sure to check them out."}, {"start": 126.52000000000001, "end": 130.84, "text": " Since we have a neural network over here, it goes without saying that the training takes"}, {"start": 130.84, "end": 136.44, "text": " place on a large number of before, after image pairs so that the network is able to"}, {"start": 136.44, "end": 138.6, "text": " produce these convolution kernels."}, {"start": 138.6, "end": 143.44, "text": " Of course, to validate this algorithm, we also need to have access to a ground truth reference"}, {"start": 143.44, "end": 144.84, "text": " to compare against."}, {"start": 144.84, "end": 149.96, "text": " We can accomplish this by withholding some information about a few intermediate frames,"}, {"start": 149.96, "end": 155.16, "text": " so we have the true images which the algorithm would have to reproduce without seeing it."}, {"start": 155.16, "end": 158.8, "text": " Kind of like giving a test to a student when we already know the answers."}, {"start": 158.8, "end": 161.12, "text": " You can see such a comparison here."}, {"start": 161.12, "end": 172.76, "text": " And now, let's have a look at these results."}, {"start": 172.76, "end": 177.68, "text": " As you can see, they are extremely smooth and the technique retains a lot of high frequency"}, {"start": 177.68, "end": 179.36, "text": " details in these images."}, {"start": 179.36, "end": 184.4, "text": " The videos also seem temporalic adherent, which means that it's devoid of the annoying"}, {"start": 184.4, "end": 189.0, "text": " flickering effect where the reconstruction takes place in a way that's a bit different"}, {"start": 189.0, "end": 190.8, "text": " in each subsequent frame."}, {"start": 190.8, "end": 194.52, "text": " Kind of that happens here, which is an excellent property of this technique."}, {"start": 194.52, "end": 198.76000000000002, "text": " The Python source code for this technique is available and is free for non-commercial"}, {"start": 198.76000000000002, "end": 199.76000000000002, "text": " uses."}, {"start": 199.76000000000002, "end": 203.32000000000002, "text": " I've put a link in the description if you have given it a try and have some results"}, {"start": 203.32000000000002, "end": 207.92000000000002, "text": " of your own, make sure to post them in the comments section or our subreddit discussion."}, {"start": 207.92000000000002, "end": 209.8, "text": " The link is available in the description."}, {"start": 209.8, "end": 229.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=WT0WtoYz2jE
Deep Learning From Human Preferences | Two Minute Papers #196
The paper "Deep Reinforcement Learning from Human Preferences" is available here: https://arxiv.org/pdf/1706.03741.pdf Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2386034/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this new age of AI, there is no shortage of articles and discussion about AI safety and, of course, rightfully so. These new learning algorithms started solving problems that were previously thought to be impossible in quick succession. Only 10 years ago, if we had told someone about half of the things that have been covered in the last few Two Minute Papers episodes, we would have been declared insane. And of course, having such powerful algorithms, we have to make sure that they are used for good. This work is a collaboration between OpenAI and DeepMind's safety team and is about introducing more human control in reinforcement learning problems. The goal was to learn to perform a backflip through reinforcement learning. This is an algorithm that tries to perform a series of actions to maximize a score, kind of like playing computer games. For instance, in Atari Breakout, if we break a lot of bricks, we get a high score, so we know we did something well. If we see that happening, we keep doing what led to this result. If not, we go back to the drawing board and try something new. But this work is about no ordinary reinforcement learning algorithm, because the score to be maximized comes from a human supervisor, and we are trying to teach a digital creature to perform a backflip. I particularly like the choice of the backflip here because we can tell when we see one, but the mathematical specification of this in terms of movement actions is rather challenging. This is a problem formulation in which humans can oversee and control the learning process, which is going to be an increasingly important aspect of learning algorithms in the future. The feedback option is very simple: we just specify whether the sequence of motions achieved our prescribed goal or not. Did it fall, or did it perform the backflip successfully? After around 700 pieces of human feedback, the algorithm was able to learn the concept of a backflip, which is quite remarkable given that these binary yes-no scores are extremely difficult to use for any sort of learning. In an earlier episode, we illustrated a similar case with a careless teacher who refuses to give out points for each problem on a written exam and only announces whether we have failed or passed. This clearly makes a dreadful learning experience, and it is incredible that the algorithm is still able to learn using these. We provide feedback on less than 1% of the actions the algorithm makes, and it can still learn difficult concepts off of these extremely sparse and vague rewards. Low quality teaching leads to high quality learning. How about that? This is significantly more complex than what other techniques were able to learn with human feedback, and it works with other games too. A word about the collaboration itself. When a company hires a bunch of super smart scientists and spends a ton of money on research, it is understandable that they want to get an edge through these projects, which often means keeping the results for themselves. This leads to excessive secrecy and a lack of collaboration with other groups, as everyone wants to keep their cards close to their chest. The fact that such collaborations can happen between these two AI research giants is a testament to how devoted they are to working together and sharing their findings with everyone free of charge for the greater good. Awesome. 
As the media is all up in arms about the demise of the human race, I feel that it is important to show the other side of the coin as well. We have top people working on AI safety right now. If you wish to help us tell these stories to more and more people, please consider supporting us on Patreon. Details are available in the video description or just click the letter P that appears on the screen in a moment. Thanks for watching and for your generous support and I'll see you next time.
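As a rough illustration of the loop described above, here is a small Python sketch: a reward model is fitted to sparse human judgments, and its predictions then stand in for the game score that the reinforcement learner maximizes. Note that the actual paper trains a neural reward predictor from comparisons between pairs of short clips, whereas this sketch uses the simpler binary did-it-look-like-a-backflip labels discussed above, with made-up trajectory features, so treat it only as a cartoon of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_reward_model(features, labels, steps=500, lr=0.1):
    """features: (N, D) summaries of rated trajectories; labels: (N,) with 1 = 'looked like a backflip'."""
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-features @ w))              # predicted approval probability
        w += lr * features.T @ (labels - p) / len(labels)    # logistic-regression gradient step
    return w

def predicted_reward(w, trajectory_features):
    return trajectory_features @ w                           # stands in for the game score from now on

# Roughly 700 labelled trajectories with 8 made-up features each.
X = rng.normal(size=(700, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=700) > 0).astype(float)
w = fit_reward_model(X, y)
print(predicted_reward(w, X[:5]))
```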
[{"start": 0.0, "end": 4.68, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.68, "end": 10.3, "text": " In this new age of AI, there is no shortage of articles and discussion about AI safety"}, {"start": 10.3, "end": 12.24, "text": " and, of course, rightfully so."}, {"start": 12.24, "end": 16.32, "text": " These new learning algorithms started solving problems that were previously thought to"}, {"start": 16.32, "end": 19.2, "text": " be impossible in quick succession."}, {"start": 19.2, "end": 23.64, "text": " Only 10 years ago, if we told someone about half of the things that have been covered"}, {"start": 23.64, "end": 28.12, "text": " in the last few Two Minute Papers episodes would have been declared insane."}, {"start": 28.12, "end": 32.64, "text": " And of course, having such powerful algorithms, we have to make sure that they are used for"}, {"start": 32.64, "end": 33.64, "text": " good."}, {"start": 33.64, "end": 39.24, "text": " This work is a collaboration between open AI and deep-mind security team and is about introducing"}, {"start": 39.24, "end": 42.72, "text": " more human control in reinforcement learning problems."}, {"start": 42.72, "end": 47.120000000000005, "text": " The goal was to learn to perform a backflip through reinforcement learning."}, {"start": 47.120000000000005, "end": 52.08, "text": " This is an algorithm that tries to perform a series of actions to maximize the score,"}, {"start": 52.08, "end": 54.040000000000006, "text": " kind of like playing computer games."}, {"start": 54.04, "end": 58.879999999999995, "text": " For instance, in Atari Breakout, if we break a lot of breaks, we get a high score, so we"}, {"start": 58.879999999999995, "end": 60.6, "text": " know we did something well."}, {"start": 60.6, "end": 64.08, "text": " If we see that happening, we keep doing what led to this result."}, {"start": 64.08, "end": 67.56, "text": " If not, we go back to the drawing board and try something new."}, {"start": 67.56, "end": 72.52, "text": " But this work is about no ordinary reinforcement learning algorithm because the score to be"}, {"start": 72.52, "end": 78.28, "text": " maximized comes from a human supervisor and we are trying to teach a digital creature to"}, {"start": 78.28, "end": 80.08, "text": " perform a backflip."}, {"start": 80.08, "end": 84.92, "text": " I particularly like the choice of the backflip here because we can tell when we see one,"}, {"start": 84.92, "end": 90.28, "text": " but the mathematical specification of this in terms of movement actions is rather challenging."}, {"start": 90.28, "end": 95.44, "text": " This is a problem formulation in which humans can overlook and control the learning process,"}, {"start": 95.44, "end": 100.16, "text": " which is going to be an increasingly important aspect of learning algorithms in the future."}, {"start": 100.16, "end": 104.92, "text": " The feedback option is very simple, we just specify whether the sequence of motions"}, {"start": 104.92, "end": 107.52, "text": " achieved our prescribed goal or not."}, {"start": 107.52, "end": 110.84, "text": " Did it fall or did it perform the backflip successfully?"}, {"start": 110.84, "end": 116.44, "text": " After around 700 human feedbacks, the algorithm was able to learn the concept of a backflip,"}, {"start": 116.44, "end": 121.47999999999999, "text": " which is quite remarkable given that these binary yes-no scores are extremely difficult"}, {"start": 121.47999999999999, "end": 123.72, 
"text": " to use for any sort of learning."}, {"start": 123.72, "end": 128.4, "text": " In an earlier episode, we illustrated a similar case with a careless teacher who refuses"}, {"start": 128.4, "end": 134.12, "text": " to give out points for each problem on a written exam and only announces whether we have failed"}, {"start": 134.12, "end": 135.12, "text": " or passed."}, {"start": 135.12, "end": 140.08, "text": " This clearly makes a dreadful learning experience and it is incredible that the algorithm is still"}, {"start": 140.08, "end": 142.08, "text": " able to learn using these."}, {"start": 142.08, "end": 147.56, "text": " We provide feedback on less than 1% of the actions the algorithm makes and it can still learn"}, {"start": 147.56, "end": 152.4, "text": " difficult concepts off of these extremely sparse and vague rewards."}, {"start": 152.4, "end": 155.72, "text": " Low quality teaching leads to high quality learning."}, {"start": 155.72, "end": 156.92000000000002, "text": " How about that?"}, {"start": 156.92000000000002, "end": 161.24, "text": " This is significantly more complex than what other techniques were able to learn with human"}, {"start": 161.24, "end": 164.36, "text": " feedback and it works with other games too."}, {"start": 164.36, "end": 166.48000000000002, "text": " A word about the collaboration itself."}, {"start": 166.48000000000002, "end": 171.96, "text": " When a company hires a bunch of super smart scientists and spends a ton of money on research,"}, {"start": 171.96, "end": 176.16000000000003, "text": " it is understandable that they want to get an edge through these projects which often"}, {"start": 176.16000000000003, "end": 178.64000000000001, "text": " means keeping the results for themselves."}, {"start": 178.64000000000001, "end": 183.28000000000003, "text": " This leads to excessive secrecy and the lack of collaboration with other groups as everyone"}, {"start": 183.28000000000003, "end": 185.96, "text": " wants to keep their cards close to their chest."}, {"start": 185.96, "end": 191.8, "text": " The fact that such collaborations can happen between these two AR research giants is a testament"}, {"start": 191.8, "end": 196.64000000000001, "text": " to how devoted they are to working together and sharing their findings with everyone free"}, {"start": 196.64000000000001, "end": 198.64000000000001, "text": " of charge for the greater good."}, {"start": 198.64000000000001, "end": 199.64000000000001, "text": " Awesome."}, {"start": 199.64000000000001, "end": 204.4, "text": " As the media is all up in arms about the demise of the human race, I feel that it is important"}, {"start": 204.4, "end": 206.76000000000002, "text": " to show the other side of the coin as well."}, {"start": 206.76000000000002, "end": 210.16000000000003, "text": " We have top people working on AI safety right now."}, {"start": 210.16000000000003, "end": 214.36, "text": " If you wish to help us tell these stories to more and more people, please consider supporting"}, {"start": 214.36, "end": 215.36, "text": " us on Patreon."}, {"start": 215.36, "end": 220.08, "text": " Details are available in the video description or just click the letter P that appears on"}, {"start": 220.08, "end": 221.60000000000002, "text": " the screen in a moment."}, {"start": 221.6, "end": 225.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2VyhmbEjs9A
AI Learns To Recreate Computer Games | Two Minute Papers #195
The paper "Game Engine Learning from Video" is available here: https://www.cc.gatech.edu/~riedl/pubs/ijcai17.pdf Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers Generative Adversarial Networks (GANs): https://www.youtube.com/watch?v=pqkpIfu36Os Generative Latent Optimization (GLO): https://www.youtube.com/watch?v=aR6M0MQBo2w We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1558063/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. For most video games that we've seen for at least a few moments, we learn to anticipate what is going to happen in the next second, and even more, given the patience and skills, we could attempt to recreate parts of the game itself. And what you see here in this work is actually even better, because it requires neither the patience nor the skills to do that. So here's the million dollar idea: let's have a learning algorithm look at some video game footage and then ask it to recreate that, so we can indulge in playing it. The concept is demonstrated on the Super Mario game, and later you will also see some results with the millennial childhood favorite, Mega Man. There are many previous works that hook into the source code of these games and try to read and predict what happens next by reading the code-level instructions. But not in this case, because this technique looks at the video output, and the learning takes place on the level of pixels. Therefore, no access to the inner workings of the game is necessary. The algorithm is given two things for the learning process. One, a sprite palette that contains all the possible elements that can appear in the game, including landscape tiles, enemies, coins and so on. And two, we also provide an input video sequence with one playthrough of the game to demonstrate the mechanics and possible interactions between these game elements. The video is a series of frames from which the technique learns how a frame can be advanced to the next one. After it has been exposed to enough training samples, it will be able to do this prediction by itself on unknown frames that it hasn't seen before. This pretty much means that we can start playing the game that it tries to mimic. And there are similarities across many games that could be exploited by endowing the learning algorithms with knowledge reused from other games, making them able to recreate even higher quality computer games, even in cases where a given scenario hasn't played out in the training footage. It used to be the privilege of computer graphics researchers to play video games during work hours, but apparently scientists in machine learning have also caught up in this regard. Way to go. A word about limitations: as the predictions are not very speedy and are based off of a set of facts learned from the video sequences, it is a question how well this technique would generalize to more complex 3D video games. Like almost all research works, this is a stepping stone, but a very important one at that, as it is a proof of concept for a really cool idea. You know the drill: a couple of papers down the line, and we will see the idea significantly improved. The results are clearly not perfect, but this is a nice demonstration of a new concept, and knowing the rate of progress in machine learning research, you will very soon see some absolutely unreal results. What's more, I expect that new levels, enemy types and mechanics will soon be synthesized for already existing games via generative adversarial networks or generative latent optimization. If you would like to hear more about these, as always, the links are available in the video description. Also, if you enjoyed this episode, please make sure to help us tell these incredible stories to more and more people by supporting us on Patreon. Your support has always been absolutely amazing. Details are available in the video description. 
Thanks for watching and for your generous support, and I'll see you next time.
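To make the idea above a bit more concrete, here is a heavily simplified toy sketch of learning frame-to-frame "facts" from a sprite-level representation of the footage and using them to advance a frame. This is not the authors' actual engine-learning algorithm; the grid encoding, the per-sprite movement rules, and all helper functions below are illustrative assumptions only.

# Toy sketch of the frame-to-frame prediction idea described above.
# NOT the paper's method; frames are assumed to be small integer grids
# where each cell holds a sprite id (0 = empty background).

import numpy as np
from collections import Counter, defaultdict

def observed_moves(prev, curr):
    """Collect (sprite_id, dx, dy) movements between two consecutive frames."""
    moves = []
    for sprite in np.unique(prev):
        if sprite == 0:
            continue
        src = np.argwhere(prev == sprite)
        dst = np.argwhere(curr == sprite)
        if len(src) and len(dst):
            dy, dx = (dst.mean(0) - src.mean(0)).round().astype(int)
            moves.append((int(sprite), int(dx), int(dy)))
    return moves

def learn_rules(frames):
    """For each sprite id, remember its most frequent per-frame movement."""
    votes = defaultdict(Counter)
    for prev, curr in zip(frames, frames[1:]):
        for sprite, dx, dy in observed_moves(prev, curr):
            votes[sprite][(dx, dy)] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

def predict_next(frame, rules):
    """Advance a frame by applying the learned movement rule of each sprite."""
    nxt = np.zeros_like(frame)
    for (y, x), sprite in np.ndenumerate(frame):
        if sprite == 0:
            continue
        dx, dy = rules.get(int(sprite), (0, 0))
        ny, nx_ = y + dy, x + dx
        if 0 <= ny < frame.shape[0] and 0 <= nx_ < frame.shape[1]:
            nxt[ny, nx_] = sprite
    return nxt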
[{"start": 0.0, "end": 4.68, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Ejola Ifehir."}, {"start": 4.68, "end": 9.64, "text": " In most video games that we've seen for at least a few moments, we'll learn to anticipate"}, {"start": 9.64, "end": 15.08, "text": " what is going to happen in the next second and even more, if given the patience and skills"}, {"start": 15.08, "end": 18.96, "text": " we could attempt to recreate parts of the game itself."}, {"start": 18.96, "end": 24.080000000000002, "text": " And what you see here in this work is actually even better because it requires neither the"}, {"start": 24.080000000000002, "end": 26.84, "text": " patience nor the skills to do that."}, {"start": 26.84, "end": 28.92, "text": " So here's the million dollar idea."}, {"start": 28.92, "end": 34.28, "text": " Let's have a learning algorithm look at some video game footage and then ask it to recreate"}, {"start": 34.28, "end": 36.760000000000005, "text": " that so we can indulge in playing it."}, {"start": 36.760000000000005, "end": 42.480000000000004, "text": " The concept is demonstrated on the Super Mario game and later you will also see some results"}, {"start": 42.480000000000004, "end": 45.68000000000001, "text": " with the millennial childhood favorite, Mega Man."}, {"start": 45.68000000000001, "end": 50.92, "text": " There are many previous works that hook into the source code of these games and try to read"}, {"start": 50.92, "end": 55.32000000000001, "text": " and predict what happens next by reading the code level instructions."}, {"start": 55.32, "end": 59.92, "text": " But not in this case because this technique looks at the video output and the learning takes"}, {"start": 59.92, "end": 62.24, "text": " place on the level of pixels."}, {"start": 62.24, "end": 66.32, "text": " Therefore no access to the inner workings of the game is necessary."}, {"start": 66.32, "end": 69.84, "text": " The algorithm is given two things for the learning process."}, {"start": 69.84, "end": 75.44, "text": " One, a sprite palette that contains all the possible elements that can appear in the game,"}, {"start": 75.44, "end": 79.44, "text": " including landscape tiles, enemies, coins and so on."}, {"start": 79.44, "end": 84.72, "text": " And two, we also provide an input video sequence with one playthrough of the game to demonstrate"}, {"start": 84.72, "end": 88.52, "text": " the mechanics and possible interactions between these game elements."}, {"start": 88.52, "end": 93.88, "text": " The video is a series of frames from which the technique learns how a frame can be advanced"}, {"start": 93.88, "end": 95.2, "text": " to the next one."}, {"start": 95.2, "end": 99.48, "text": " After it has been exposed to enough training samples, it will be able to do this prediction"}, {"start": 99.48, "end": 103.6, "text": " by itself on unknown frames that it hasn't seen before."}, {"start": 103.6, "end": 107.68, "text": " This pretty much means that we can start playing the game that it tries to mimic."}, {"start": 107.68, "end": 112.28, "text": " And there are similarities across many games that could be exploited and how in the learning"}, {"start": 112.28, "end": 117.28, "text": " algorithms with knowledge reused from other games, making them able to recreate even"}, {"start": 117.28, "end": 122.32000000000001, "text": " higher quality computer games, even in cases where a given scenario hasn't played out in"}, {"start": 122.32000000000001, "end": 123.6, "text": " the training footage."}, 
{"start": 123.6, "end": 128.44, "text": " It used to be the privilege of computer graphics researchers to play video games during work"}, {"start": 128.44, "end": 133.64, "text": " hours, but apparently scientists in machine learning also caught up in this regard."}, {"start": 133.64, "end": 134.64, "text": " Way to go."}, {"start": 134.64, "end": 139.32, "text": " A word about limitations, as the predictions are not very speedy and are based off of a"}, {"start": 139.32, "end": 144.4, "text": " set of facts learned from the video sequences, it is a question as to how well this technique"}, {"start": 144.4, "end": 147.95999999999998, "text": " would generalize to more complex 3D video games."}, {"start": 147.95999999999998, "end": 152.88, "text": " As almost all research works, this is a stepping stone, but a very important one at that, as"}, {"start": 152.88, "end": 156.2, "text": " this is a proof of concept for a really cool idea."}, {"start": 156.2, "end": 160.48, "text": " You know the drill, a couple papers down the line and will see the idea significantly"}, {"start": 160.48, "end": 161.48, "text": " improved."}, {"start": 161.48, "end": 166.0, "text": " The results are clearly not perfect, but it is a nice demonstration of a new concept,"}, {"start": 166.0, "end": 171.04, "text": " knowing the rate of progress in machine learning research you will very soon see some absolutely"}, {"start": 171.04, "end": 172.56, "text": " unreal results."}, {"start": 172.56, "end": 178.28, "text": " What's even more, I expect that new levels, enemy types and mechanics will soon be synthesized"}, {"start": 178.28, "end": 184.56, "text": " to already existing games via generative adversarial networks or generative latent optimization."}, {"start": 184.56, "end": 188.36, "text": " If you would like to hear more about these, as always, the links are available in the"}, {"start": 188.36, "end": 189.36, "text": " video description."}, {"start": 189.36, "end": 194.4, "text": " Also, if you enjoyed this episode, please make sure to help us tell these incredible stories"}, {"start": 194.4, "end": 197.64000000000001, "text": " to more and more people by supporting us on Patreon."}, {"start": 197.64000000000001, "end": 200.76000000000002, "text": " Your support has always been absolutely amazing."}, {"start": 200.76000000000002, "end": 203.0, "text": " Details are available in the video description."}, {"start": 203.0, "end": 224.12, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nsuAQcvafCs
Audio To Obama: AI Learns Lip Sync from Audio | Two Minute Papers #194
The paper "Synthesizing Obama: Learning Lip Sync from Audio" is available here: https://grail.cs.washington.edu/projects/AudioToObama/ Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers Patreon notes: https://www.patreon.com/TwoMinutePapers/posts?tag=what%27s%20new Recommended for you: WaveNet: https://www.youtube.com/watch?v=CqFIVCD1WWo Face2face: https://www.youtube.com/watch?v=_S1lyQbbJM4 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1174489/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Deepfake #Face2Face #Audio2Obama
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is doing something truly remarkable. If we have a piece of audio of a real person speaking and a target video footage, it will retime and change the video so that the target person appears to be uttering these words. This is different from what we've seen a few episodes ago, where scientists at Nvidia worked on synthesizing lip-sync geometry for digital characters solely relying on audio footage. The results were quite amazing. Have a look. Listen up. The train your well houses is our main objective. Well, like say, you and your men will do that. You have to go in and out very quick. This was great for animating digital characters when all we have is sound. But this time around, we are interested in reanimating the footage of real, existing people. The prerequisite to do this with a learning algorithm is to have a ton of data to train on, which we have in our possession, as there are many hours of footage of the former president speaking during his weekly address. This is done using a recurrent neural network. Recurrent neural networks are learning algorithms where the inputs and outputs can be sequences of data. So here, in the first part, the input can be a piece of audio with the person saying something, and it is able to synthesize the appropriate mouth shapes and their evolution over time to match the audio. The next step is creating an actual mouth texture from this rough shape that comes out of the learning algorithm, which is then used as an input to the synthesizer. Furthermore, the algorithm is also endowed with an additional pose matching module to make sure that the synthesized mouth texture aligns with the posture of the head properly. The final re-timing step makes sure that the head motions follow the speech correctly. If you have any doubts whether this is required, here are some results with and without the re-timing step. I grew up without my father around. Without re-timing, he moves randomly and appears unnatural. Well, I wonder what my life would have been like if he had been a greater presence. I've also tried extra hard to be a good dad for my own daughters. Like all dads, I worry about my girls' safety all the time. You can see that this indeed substantially enhances the realism of the final footage. Even better, when combined with Google DeepMind's WaveNet, given enough training data, we could skip the audio footage altogether, and just write a piece of text, making Obama or someone else say what we have written. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. Aspects of the Sublime in English poetry and painting, 1770-1850. There are also a ton of other details to be worked out. For instance, there are cases where the mouth moves before the person starts to speak, which is to be taken into consideration. The dreaded umms and ahs are classical examples of that. Small confines of the legal community. I think it's real important to keep the focus on the broader world out there and see that for a lot of kids, the doors that have been opened to me aren't open to them. There's also an important jaw correction step and more. This is a brilliant piece of work with many non-trivial decisions that are described in the paper. Make sure to have a look at it for details; as always, the link is available in the video description. The results are also compared to the Face2Face paper from last year that we also covered in the series.
It is absolutely insane to see this rate of progress over the lapse of only one year. If you have enjoyed this episode and you feel that eight of these videos a month is worth a dollar, please consider supporting us on Patreon. You can pick up some really cool perks there, and it is also a great deal of help for us to make better videos for you in the future. Earlier, I also wrote a few words about the changes we were able to make because of your amazing support. Details are available in the description. Thanks for watching and for your generous support and I'll see you next time.
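As a rough illustration of the recurrent mapping described above, here is a minimal sketch of a network that turns an audio feature sequence into per-frame mouth-shape parameters. It is not the paper's architecture; the feature and output dimensions, and PyTorch as the framework, are assumptions made purely for the example.

# Minimal sketch (not the paper's exact model) of a recurrent network that
# maps a sequence of audio features to mouth-shape parameters per frame.

import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, audio_dim=28, hidden=60, mouth_dim=18):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, mouth_dim)   # sparse mouth-shape coords

    def forward(self, audio_feats):               # (batch, time, audio_dim)
        h, _ = self.rnn(audio_feats)
        return self.out(h)                        # (batch, time, mouth_dim)

# Usage: train with an L2 loss between predicted and tracked mouth shapes.
model = AudioToMouth()
audio = torch.randn(4, 100, 28)                   # 4 clips, 100 time steps each
mouth = model(audio)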
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efeira."}, {"start": 4.4, "end": 7.4, "text": " This work is doing something truly remarkable."}, {"start": 7.4, "end": 12.5, "text": " If we have a piece of audio of a real person speaking and a target video footage,"}, {"start": 12.5, "end": 28.5, "text": " it will retime and change the video so that the target person appears to be uttering these words."}, {"start": 42.5, "end": 68.0, "text": " This is different from what we've seen a few episodes ago where scientists at Nvidia worked on synthesizing lip-sync geometry for digital characters solely relying on audio footage."}, {"start": 68.0, "end": 71.0, "text": " The results were quite amazing. Have a look."}, {"start": 71.0, "end": 76.5, "text": " Listen up. The train your well houses is our main objective."}, {"start": 76.5, "end": 83.0, "text": " Well, like say, you and your men will do that. You have to go in and out very quick."}, {"start": 83.0, "end": 88.0, "text": " This was great for animating digital characters when all we have is sound."}, {"start": 88.0, "end": 93.0, "text": " But this time around, we are interested in reanimating the footage of real existing people."}, {"start": 93.0, "end": 98.5, "text": " The prerequisite to do this with a learning algorithm is to have a ton of data to train on,"}, {"start": 98.5, "end": 105.5, "text": " which we have in our possession as there are many hours of footage of the former president speaking during his weekly address."}, {"start": 105.5, "end": 108.5, "text": " This is done using a recurrent neural network."}, {"start": 108.5, "end": 114.5, "text": " Recurrent neural networks are learning algorithms where the inputs and outputs can be sequences of data."}, {"start": 114.5, "end": 120.0, "text": " So here, in the first part, the input can be a piece of audio with the person saying something,"}, {"start": 120.0, "end": 126.0, "text": " and it is able to synthesize the appropriate mouth shapes and their evolution over time to match the audio."}, {"start": 126.0, "end": 132.5, "text": " The next step is creating an actual mouth texture from this rough shape that comes out from the learning algorithm,"}, {"start": 132.5, "end": 135.5, "text": " which is then used as an input to the synthesizer."}, {"start": 135.5, "end": 145.5, "text": " Furthermore, the algorithm is also endowed with an additional pose matching module to make sure that the synthesized mouth texture aligns with the posture of the head properly."}, {"start": 145.5, "end": 150.5, "text": " The final re-timing step makes sure that the head motions follow the speech correctly."}, {"start": 150.5, "end": 157.5, "text": " If you have any doubts whether this is required, here are some results with and without the re-timing step."}, {"start": 157.5, "end": 159.5, "text": " I grew up without my father around."}, {"start": 159.5, "end": 163.5, "text": " Without re-timing, he moves randomly and appears a natural."}, {"start": 163.5, "end": 168.0, "text": " Well, I wonder what my life would have been like if he had been a greater presence."}, {"start": 168.0, "end": 172.5, "text": " I've also tried extra hard to be a good dad for my own daughters."}, {"start": 172.5, "end": 176.5, "text": " Like all dads, I worry about my girls' safety all the time."}, {"start": 176.5, "end": 181.5, "text": " You can see that this indeed substantially enhances the realism of the final footage."}, {"start": 181.5, "end": 189.5, "text": " Even 
better, when combined with Google DeepMind's wave net, given enough training data, we could skip the audio footage altogether,"}, {"start": 189.5, "end": 194.5, "text": " and just write a piece of text making Obama or someone else say what we have written."}, {"start": 194.5, "end": 200.5, "text": " The Blue Lagoon is a 1980 American romance and adventure film directed by Randall Cliser."}, {"start": 200.5, "end": 205.5, "text": " Aspects of the Sublime in English poetry and painting, 1770-1850."}, {"start": 205.5, "end": 208.5, "text": " There are also a ton of other details to be worked out."}, {"start": 208.5, "end": 215.5, "text": " For instance, there are cases where the mouth moves before the person starts to speak, which is to be taken into consideration."}, {"start": 215.5, "end": 219.5, "text": " The dreaded arms and eyes are classical examples of that."}, {"start": 219.5, "end": 221.5, "text": " Small confines of the legal community."}, {"start": 221.5, "end": 227.5, "text": " I think it's real important to keep the focus on the broader world out there and see that"}, {"start": 227.5, "end": 233.5, "text": " for a lot of kids, the doors that have been opened to me aren't open to them."}, {"start": 233.5, "end": 237.5, "text": " There's also an important jaw correction step and more."}, {"start": 237.5, "end": 242.5, "text": " This is a brilliant piece of work with many non-trivial decisions that are described in the paper."}, {"start": 242.5, "end": 247.5, "text": " Make sure to have a look at it for details, as always, the link is available in the video description."}, {"start": 247.5, "end": 253.5, "text": " The results are also compared to the face-to-face paper from last year that we also covered in the series."}, {"start": 253.5, "end": 259.5, "text": " It is absolutely insane to see this rate of progress over the laps of only one year."}, {"start": 259.5, "end": 264.5, "text": " If you have enjoyed this episode and you feel that eight of these videos a month is worth a dollar,"}, {"start": 264.5, "end": 266.5, "text": " please consider supporting us on Patreon."}, {"start": 266.5, "end": 273.5, "text": " You can pick up some really cool perks there and it is also a great deal of help for us to make better videos for you in the future."}, {"start": 273.5, "end": 279.5, "text": " Earlier, I also wrote a few words about the changes we were able to make because of your amazing support."}, {"start": 279.5, "end": 281.5, "text": " Details are available in the description."}, {"start": 281.5, "end": 291.5, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=GNx8rgNcw5c
Light Transport on Specular Microstructure | Two Minute Papers #193
The paper "Position-Normal Distributions for Efficient Rendering of Specular Microstructure" is available here: http://people.eecs.berkeley.edu/~lingqi/publications/paper_glints2.pdf http://people.eecs.berkeley.edu/~lingqi/ Vienna Rendering course: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2796240/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Using light simulation programs, we are able to populate a virtual scene with objects, assign material models to them, and create a beautiful, photorealistic image of this digital scene. The theory of light transport follows the physics of light. Therefore, these images should be indistinguishable from reality, and fortunately, they often are. However, there are some cases when we can tell them apart from real images. And the reason for this is not the inaccuracies of the light transport algorithm, but the oversimplifications used in the geometry and material models. The main issue is that in our mathematical models, these materials are often defined to be too perfect. But in reality, metals are rarely perfectly polished, and the classical material models in light transport can rarely capture the microstructures that make surfaces imperfect. This algorithm is about rendering new material models that can represent imperfect materials like scratched coating and metallic flakes on car paint, a leather sofa, a wooden floor, or a teapot. Just look at these phenomenal images. Previous techniques exist to solve this problem, but they take extremely long and are typically limited to flat surfaces. One of the main difficulties of the problem is that these tiny flakes and scratches are typically orders of magnitude smaller than a pixel, and therefore, they require a lot of care and additional computations to render. This work provides an exact, closed-form solution to this that is highly efficient to render. It is over 100 times faster than previous techniques, has fewer limitations as it works on curved surfaces, and it only takes 40% longer to render compared to the standard perfect material models. Only 40% more time for this? Sign me up! It is truly incredible that we can create images of this sophistication using science. It is also highly practical, as it can be plugged in as a standard material model without any crazy modifications to the simulation program. Looking forward to seeing some amazing animations using more and more realistic material models in the near future. If you would like to learn more about light simulations, I have been holding a full master-level course on it at the Technical University of Vienna for a few years now. After a while, I got a strong feeling that the teachings shouldn't only be available for the lucky 30 people in the classroom who can afford a college education. The teachings should be available for everyone. And now, the entirety of this course is available free of charge for everyone, where we learn the theory of light from scratch and implement a really cool light simulation program together. If you want to solve a few infinite dimensional integrals with me, give it a go. As always, details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
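To get a feel for why sub-pixel flakes and scratches are so costly for a standard renderer, here is a brute-force toy estimate of the distribution of micro-normals inside a single pixel footprint. This is emphatically not the paper's closed-form solution; the procedural "normal map", the narrow specular threshold, and the sample count are all made up for illustration.

# Brute-force illustration of glinty microstructure cost: estimating how many
# micro-normals inside one pixel footprint align with a given half vector.
# NOT the paper's method; the bump pattern below is an arbitrary stand-in.

import numpy as np

def micro_normal(u, v):
    """Toy bumpy surface: slope of a high-frequency bump pattern -> unit normal."""
    hu = 0.3 * np.cos(400.0 * u)
    hv = 0.3 * np.cos(400.0 * v)
    n = np.stack([-hu, -hv, np.ones_like(hu)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def ndf_over_footprint(u0, v0, width, half_vec, samples=100_000):
    """Fraction of micro-normals in the footprint aligned with half_vec."""
    u = np.random.uniform(u0, u0 + width, samples)
    v = np.random.uniform(v0, v0 + width, samples)
    n = micro_normal(u, v)
    aligned = (n @ half_vec) > 0.999      # very narrow specular lobe
    return aligned.mean()

h = np.array([0.05, 0.0, 1.0]); h = h / np.linalg.norm(h)
print(ndf_over_footprint(0.2, 0.7, 1e-3, h))   # many samples per pixel -> slow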
[{"start": 0.0, "end": 4.58, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizona Ifeher."}, {"start": 4.58, "end": 9.76, "text": " Using light simulation programs, we are able to populate a virtual scene with objects,"}, {"start": 9.76, "end": 15.72, "text": " assign material models to them, and create a beautiful, photorealistic image of this digital scene."}, {"start": 15.72, "end": 19.44, "text": " The theory of light transport follows the physics of light."}, {"start": 19.44, "end": 25.46, "text": " Therefore, these images should be indistinguishable from reality, and fortunately, they often are."}, {"start": 25.46, "end": 29.76, "text": " However, there are some cases when we can tell them apart from real images."}, {"start": 29.76, "end": 33.92, "text": " And the reason for this is not the inaccuracies of the light transport algorithm,"}, {"start": 33.92, "end": 38.44, "text": " but the oversimplifications used in the geometry and material models."}, {"start": 38.44, "end": 44.800000000000004, "text": " The main issue is that in our mathematical models, these materials are often defined to be too perfect."}, {"start": 44.800000000000004, "end": 48.32, "text": " But in reality, metals are rarely perfectly polished,"}, {"start": 48.32, "end": 55.52, "text": " and the classical material models in light transport can rarely capture these microstructures that make surfaces imperfect."}, {"start": 55.52, "end": 61.36, "text": " This algorithm is about rendering new material models that can represent the imperfect materials"}, {"start": 61.36, "end": 68.4, "text": " like scratched coating and metallic flakes on car paint, a leather sofa, wooden floor, or a teapot."}, {"start": 68.4, "end": 70.80000000000001, "text": " Just look at these phenomenal images."}, {"start": 70.80000000000001, "end": 77.88, "text": " Previous techniques exist to solve this problem, but they take extremely long and are typically limited to flat surfaces."}, {"start": 77.88, "end": 82.16, "text": " One of the main difficulties of the problem is that these tiny flakes and scratches"}, {"start": 82.16, "end": 88.0, "text": " are typically orders of magnitude smaller than a pixel, and therefore, they require a lot of care"}, {"start": 88.0, "end": 90.16, "text": " and additional computations to render."}, {"start": 90.16, "end": 95.6, "text": " This work provides an exact, closed-form solution to this that is highly efficient to render."}, {"start": 95.6, "end": 103.03999999999999, "text": " It is over 100 times faster than previous techniques, has less limitations as it works on curved surfaces,"}, {"start": 103.03999999999999, "end": 109.52, "text": " and it only takes 40% longer to render it compared to the standard perfect material models."}, {"start": 109.52, "end": 113.03999999999999, "text": " Only 40% more time for this? 
Sign me up!"}, {"start": 113.03999999999999, "end": 118.32, "text": " It is truly incredible that we can create images of this sophistication using science."}, {"start": 118.32, "end": 123.03999999999999, "text": " It is also highly practical as it can be plugged in as a standard material model"}, {"start": 123.03999999999999, "end": 126.32, "text": " without any crazy modifications to the simulation program."}, {"start": 126.32, "end": 131.6, "text": " Looking forward to seeing some amazing animations using more and more realistic material models"}, {"start": 131.6, "end": 132.64, "text": " in the near future."}, {"start": 132.64, "end": 135.35999999999999, "text": " If you would like to learn more about light simulations,"}, {"start": 135.36, "end": 141.60000000000002, "text": " I have been holding a full master-level course on it at the Technical University of Vienna for a few years now."}, {"start": 141.60000000000002, "end": 146.32000000000002, "text": " After a while, I get a strong feeling that the teachings shouldn't only be available"}, {"start": 146.32000000000002, "end": 150.64000000000001, "text": " for the lucky 30 people in the classroom who can afford a college education."}, {"start": 150.64000000000001, "end": 153.44000000000003, "text": " The teachings should be available for everyone."}, {"start": 153.44000000000003, "end": 158.4, "text": " And now, the entirety of this course is available free of charge for everyone"}, {"start": 158.4, "end": 164.24, "text": " where we learn the theory of light from scratch and implement a really cool light simulation program together."}, {"start": 164.24, "end": 168.56, "text": " If you want to solve a few infinite dimensional integrals with me, give it a go."}, {"start": 168.56, "end": 171.52, "text": " As always, details are available in the video description."}, {"start": 171.52, "end": 195.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Dvd1jQe3pq0
Hindsight Experience Replay | Two Minute Papers #192
The paper "Hindsight Experience Replay" is available here: https://arxiv.org/pdf/1707.01495.pdf Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers Recommended for you: Deep Reinforcement Terrain Learning - https://www.youtube.com/watch?v=wBrwN4dS-DA&t=109s Digital Creatures Learn To Walk - https://www.youtube.com/watch?v=kQ2bqz3HPJE Task-based Animation of Virtual Characters - https://www.youtube.com/watch?v=ZHoNpxUHewQ Real-Time Character Control With Phase-Functioned Neural Networks - https://www.youtube.com/watch?v=wlndIQHtiFw DeepMind's AI Learns Locomotion From Scratch - https://www.youtube.com/watch?v=14zkfDTN_qo We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1193318/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is an awesome algorithm that is able to play computer games, navigate helicopters, hit a baseball, or even defeat Go champions when combined with a neural network and Monte Carlo tree search. It is a quite general algorithm that is able to take on a variety of difficult problems that involve observing an environment and coming up with a series of actions to maximize the score. In a previous episode, we had a look at DeepMind's algorithm where a set of movement actions had to be chosen to navigate in a difficult 3D environment efficiently. The score to be maximized was the distance measured from the starting point; the further our character went, the higher score it was given, and it successfully learned the concept of locomotion. Really cool. A prerequisite for a reinforcement learner to work properly is that it has to be given informative reward signals. For instance, if we take a written exam, as an output we would like to get a detailed breakdown of the number of points we got for each problem. This way we know where we did well and which kinds of problems need some more work. However, imagine having a really careless teacher who never tells us the points, but would only tell us whether we have failed or passed. No explanation, no points for individual tasks, no telling whether we failed by a lot or just a tiny bit. Nothing. Just an attempt, and we failed. Next time, we failed again, and again, and again, and again. Now this would be a dreadful learning experience, because we would have absolutely no idea what to improve. Clearly, this teacher would have to be fired. However, when formulating a reinforcement learning problem, instead of using more informative scores, it is much easier to just tell whether the algorithm was successful or not. It is very convenient for us to be this careless teacher. Otherwise, what score would make sense for a helicopter control problem when we almost crash into a tree? This part is called reward engineering, and the main issue is that we have to adapt the problem to the algorithm, where it would be best if the algorithm adapted to the problem. This has been a long-standing problem in reinforcement learning research, and a potential solution would open up the possibility of solving even harder and more interesting problems with learning algorithms. And this is exactly what researchers at OpenAI tried to solve by introducing hindsight experience replay, or HER in short. Very apt. This algorithm takes on problems where the scores are binary, which means that it either passed or failed the prescribed task. A classic careless teacher scenario. And these rewards are not only binary, but very sparse as well, which further exacerbates the difficulty of the problem. In the video, you can see a comparison with a previous algorithm with and without the HER extension. The higher the number of epochs you see above, the longer the algorithm was able to train. The incredible thing here is that it is able to achieve a goal even if it had never been able to reach it during training. The key idea is that we can learn just as much from undesirable outcomes as from desirable ones. Let me quote the authors: "Imagine that you are learning how to play hockey and are trying to shoot a puck into a net. You hit the puck, but it misses the net on the right side.
The conclusion drawn by a standard reinforcement learning algorithm in such a situation would be that the performed sequence of actions does not lead to a successful shot, and little if anything would be learned. It is, however, possible to draw another conclusion, namely that this sequence of actions would be successful if the net had been placed further to the right." They have achieved this by storing and replaying previous experiences with different potential goals. As always, the details are available in the paper, make sure to have a look. Now, it is always good to test whether the whole system works well in software. However, its usefulness has also been demonstrated by deploying it on a real robot arm. You can see the goal written on the screen alongside the results. A really cool piece of work that can potentially open up new ways of thinking about reinforcement learning. After all, it's great to have learning algorithms that are so good they can solve problems that we formulate in such a lazy way that we'd have to be fired. And here's a quick question: do you think 8 of these episodes a month is worth a dollar? If you have enjoyed this episode and your answer is yes, please consider supporting us on Patreon. Details are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
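The goal-relabeling trick described in the quote above can be sketched in a few lines. The snippet below is only a minimal illustration of hindsight relabeling on top of a binary reward, not the full HER algorithm; the scalar states, the reward function, and the episode format are assumptions made for the example.

# Minimal sketch of hindsight relabeling (not the full HER algorithm):
# after an episode, also store copies of each transition where the goal is
# replaced by a state that was actually reached later, so that a sparse,
# binary reward still carries a learning signal.

import random

def her_relabel(episode, reward_fn, k=4):
    """episode: list of (state, action, next_state, goal). Returns transitions
    with the original goal plus k 'future' achieved goals per step."""
    out = []
    for t, (s, a, s_next, goal) in enumerate(episode):
        out.append((s, a, s_next, goal, reward_fn(s_next, goal)))
        future = episode[t:]
        for _ in range(k):
            _, _, achieved, _ = random.choice(future)
            new_goal = achieved            # pretend this was the goal all along
            out.append((s, a, s_next, new_goal, reward_fn(s_next, new_goal)))
    return out

def sparse_reward(state, goal, eps=1e-3):
    """Binary reward: 1 only if the achieved (scalar) state matches the goal."""
    return 1.0 if abs(state - goal) < eps else 0.0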
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.5200000000000005, "end": 9.92, "text": " Reinforcement learning is an awesome algorithm that is able to play computer games, navigate"}, {"start": 9.92, "end": 15.48, "text": " helicopters, hit a baseball, or even defeat gold champions when combined together with"}, {"start": 15.48, "end": 18.080000000000002, "text": " a neural network and Monte Carlo research."}, {"start": 18.080000000000002, "end": 23.28, "text": " It is a quite general algorithm that is able to take on a variety of difficult problems"}, {"start": 23.28, "end": 28.68, "text": " that involve observing an environment and coming up with a series of actions to maximize"}, {"start": 28.68, "end": 29.68, "text": " the score."}, {"start": 29.68, "end": 34.44, "text": " In a previous episode, we had a look at deep minds algorithm where a set of movement actions"}, {"start": 34.44, "end": 39.2, "text": " had to be chosen to navigate in a difficult 3D environment efficiently."}, {"start": 39.2, "end": 44.04, "text": " The score to be maximized was the distance measured from the starting point, the further"}, {"start": 44.04, "end": 48.480000000000004, "text": " our character went, the higher score it was given, and it has successfully learned"}, {"start": 48.480000000000004, "end": 50.480000000000004, "text": " the concept of locomotion."}, {"start": 50.480000000000004, "end": 51.480000000000004, "text": " Really cool."}, {"start": 51.480000000000004, "end": 56.24, "text": " A prerequisite for a reinforcement learner to work properly is that it has to be given"}, {"start": 56.24, "end": 58.519999999999996, "text": " informative reward signals."}, {"start": 58.52, "end": 63.2, "text": " For instance, if we go to a written exam as an output, we would like to get a detailed"}, {"start": 63.2, "end": 67.04, "text": " breakdown of the number of points we got for each problem."}, {"start": 67.04, "end": 71.60000000000001, "text": " This way we know where we did well and which kinds of problems need some more work."}, {"start": 71.60000000000001, "end": 76.80000000000001, "text": " However, imagine having a really careless teacher who never tells us the points but would"}, {"start": 76.80000000000001, "end": 80.2, "text": " only tell us whether we have failed or passed."}, {"start": 80.2, "end": 85.56, "text": " No explanation, no points for individual tasks, no telling whether we failed by a lot or"}, {"start": 85.56, "end": 87.24000000000001, "text": " just a tiny bit."}, {"start": 87.24000000000001, "end": 88.24000000000001, "text": " Nothing."}, {"start": 88.24, "end": 89.96, "text": " Just attempt, we failed."}, {"start": 89.96, "end": 93.52, "text": " Next time, we failed again, and again, and again, and again."}, {"start": 93.52, "end": 98.47999999999999, "text": " Now this would be a dreadful learning experience because we would have absolutely no idea what"}, {"start": 98.47999999999999, "end": 99.47999999999999, "text": " to improve."}, {"start": 99.47999999999999, "end": 102.19999999999999, "text": " Clearly, this teacher would have to be fired."}, {"start": 102.19999999999999, "end": 107.0, "text": " However, when formulating a reinforcement learning problem, instead of using more informative"}, {"start": 107.0, "end": 112.52, "text": " scores, it is much easier to just tell whether the algorithm was successful or not."}, {"start": 112.52, "end": 116.03999999999999, "text": " It is very 
convenient for us to be this careless teacher."}, {"start": 116.04, "end": 120.68, "text": " Otherwise, what score would make sense for a helicopter control problem when we almost"}, {"start": 120.68, "end": 122.2, "text": " crash into a tree?"}, {"start": 122.2, "end": 127.28, "text": " This part is called reward engineering, and the main issue is that we have to adapt the"}, {"start": 127.28, "end": 132.92000000000002, "text": " problem to the algorithm, where the best would be if the algorithm would adapt to the problem."}, {"start": 132.92000000000002, "end": 138.0, "text": " This has been a long-standing problem in reinforcement learning research and the potential solution"}, {"start": 138.0, "end": 143.36, "text": " would open up the possibility of solving even harder and more interesting problems with"}, {"start": 143.36, "end": 144.96, "text": " learning algorithms."}, {"start": 144.96, "end": 151.32000000000002, "text": " And this is exactly what researchers at OpenAI tried to solve by introducing hindsight experience"}, {"start": 151.32000000000002, "end": 155.84, "text": " replay, HER or HER in short."}, {"start": 155.84, "end": 156.84, "text": " Very apt."}, {"start": 156.84, "end": 162.28, "text": " This algorithm takes on problems where the scores are binary, which means that it either passed"}, {"start": 162.28, "end": 164.44, "text": " or failed the prescribed task."}, {"start": 164.44, "end": 167.12, "text": " A classic careless teacher scenario."}, {"start": 167.12, "end": 172.48000000000002, "text": " And these rewards are not only binary, but very sparse as well, which further exacerbates"}, {"start": 172.48000000000002, "end": 174.12, "text": " the difficulty of the problem."}, {"start": 174.12, "end": 179.0, "text": " In the video, you can see a comparison with a previous algorithm with and without the"}, {"start": 179.0, "end": 180.44, "text": " HER extension."}, {"start": 180.44, "end": 185.36, "text": " The higher the number of epochs you see above, the longer the algorithm was able to train."}, {"start": 185.36, "end": 190.6, "text": " The incredible thing here is that it is able to achieve a goal even if it had never been"}, {"start": 190.6, "end": 192.72, "text": " able to reach it during training."}, {"start": 192.72, "end": 198.56, "text": " The key idea is that we can learn just as much from undesirable outcomes as from desirable"}, {"start": 198.56, "end": 199.56, "text": " ones."}, {"start": 199.56, "end": 201.08, "text": " Let me quote the authors."}, {"start": 201.08, "end": 205.16000000000003, "text": " The authors are not only learning how to play hockey and are trying to shoot a puck into"}, {"start": 205.16000000000003, "end": 206.16000000000003, "text": " a net."}, {"start": 206.16000000000003, "end": 208.92000000000002, "text": " You hit the puck, but it misses the net on the right side."}, {"start": 208.92000000000002, "end": 213.76000000000002, "text": " The conclusion drawn by a standard reinforcement learning algorithm in such a situation would"}, {"start": 213.76000000000002, "end": 219.4, "text": " be that the performed sequence of actions does not lead to a successful shot and little if"}, {"start": 219.4, "end": 221.04000000000002, "text": " anything would be learned."}, {"start": 221.04000000000002, "end": 226.4, "text": " It is however possible to draw another conclusion, namely that this sequence of actions would"}, {"start": 226.4, "end": 230.20000000000002, "text": " be successful if the net had been placed further to the right."}, 
{"start": 230.2, "end": 235.32, "text": " They have achieved this by storing and replaying previous experiences with different potential"}, {"start": 235.32, "end": 236.32, "text": " goals."}, {"start": 236.32, "end": 239.92, "text": " As always, the details are available in the paper, make sure to have a look."}, {"start": 239.92, "end": 245.07999999999998, "text": " Now it is always good to test things out whether the whole system works well in software."}, {"start": 245.07999999999998, "end": 250.67999999999998, "text": " However, its usefulness has been demonstrated by deploying it on a real robot arm."}, {"start": 250.67999999999998, "end": 254.04, "text": " You can see the goal written on the screen alongside with the results."}, {"start": 254.04, "end": 259.52, "text": " A really cool piece of work that can potentially open up new ways of thinking about reinforcement"}, {"start": 259.52, "end": 260.52, "text": " learning."}, {"start": 260.52, "end": 265.08, "text": " After all, it's great to have learning algorithms that are so good they can solve problems"}, {"start": 265.08, "end": 269.2, "text": " that we formulate in such a lazy way that we'd have to be fired."}, {"start": 269.2, "end": 270.56, "text": " And here's a quick question."}, {"start": 270.56, "end": 274.15999999999997, "text": " Do you think 8 of these episodes a month is worth a dollar?"}, {"start": 274.15999999999997, "end": 278.4, "text": " If you have enjoyed this episode and your answer is yes, please consider supporting us on"}, {"start": 278.4, "end": 279.4, "text": " Patreon."}, {"start": 279.4, "end": 281.52, "text": " Details are available in the video description."}, {"start": 281.52, "end": 301.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=aR6M0MQBo2w
Latent Space Human Face Synthesis | Two Minute Papers #191
The paper "Optimizing the Latent Space of Generative Networks" is available here: https://arxiv.org/pdf/1707.05776.pdf Khan Academy's video on the Nash equilibrium: https://www.khanacademy.org/economics-finance-domain/microeconomics/nash-equilibrium-tutorial/nash-eq-tutorial/v/prisoners-dilemma-and-nash-equilibrium Earlier episodes showcased in the video: Image Editing with Generative Adversarial Networks - https://www.youtube.com/watch?v=pqkpIfu36Os AI Learns to Synthesize Pictures of Animals - https://www.youtube.com/watch?v=D4C1dB9UheQ AI Makes 3D Models From Photos - https://www.youtube.com/watch?v=HO1LYJb818Q Font paper: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2589641/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In many previous episodes, we talked about generative adversarial networks, a recent new line in machine learning research with some absolutely fantastic results in a variety of areas. They can synthesize new images of animals, create 3D models from photos, or dream up new products based on our edits of an image. A generative adversarial network means that we have two neural networks battling each other in an arms race. The generator network tries to create more and more realistic images, and these are passed to the discriminator network, which tries to learn the difference between real photographs and fake, forged images. During this process, the two neural networks learn and improve together until they become experts at their own craft. And as you can see, the results are fantastic. However, training these networks against each other is anything but roses and sunshine. We don't know if the process converges, or if we reach a Nash equilibrium. A Nash equilibrium is a state where both actors believe they have found an optimal strategy while taking into account the other actor's possible decisions, and neither of them has any interest in changing their strategy. This is a classical scenario in game theory, where two convicted criminals are pondering whether they should snitch on each other without knowing how the other decided to act. If you wish to hear more about the Nash equilibrium, I've put a link to Khan Academy's video in the description; make sure to check it out, you'll love it. I find it highly exciting that there are parallels between AI and game theory. However, the even cooler thing is that here, we try to build a system where we don't have to deal with such a situation. This is called generative latent optimization, GLO in short, and it is about introducing tricks to do this by only using a generator network. If you have ever read up on font design, you know that it is a highly complex field. However, if we'd like to create a new font type, what we are typically interested in is only a few features, like how curvy the letters are, or whether we are dealing with a serif kind of font, and simple descriptions like that. The same principle can be applied to human faces, animals, and most topics you can imagine. This means that there are many complex concepts that contain a ton of information, most of which can be captured by a simple description with only a few features. This is done by projecting this high-dimensional data onto a low-dimensional latent space. This latent space helps eliminate adversarial optimization, which makes the system much easier to train. And the main selling point is that it still retains the attractive properties of generative adversarial networks. This means that it can synthesize new samples from the learned dataset. If it has learned the concept of birds, it will be able to synthesize new bird species. It can perform continuous interpolation between data points. This means that, for instance, we can produce intermediate states between two chosen furniture types or light fixtures. It is also able to perform simple arithmetic operations between any number of data points. For instance, if A is males with sunglasses, B is males without sunglasses, and C is females, then A minus B plus C is going to generate females in sunglasses. It can also do super-resolution and much, much more. Make sure to have a look at the paper in the video description.
Now, before we go, we shall address the elephant in the room: these images are tiny. Our seasoned Fellow Scholars know that for generative adversarial networks, there are plenty of works on how to synthesize high-resolution images with more details. This means that this is a piece of work that opens up exciting new horizons, but it is not to be measured against the tenth follow-up work on top of a more established line of research. Two Minute Papers will be here for you to keep you updated on the progress, which is, as we know, staggeringly quick in machine learning research. Don't forget to subscribe and click the bell icon to never miss an episode. Thanks for watching and for your generous support, and I'll see you next time.
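As a rough sketch of the training idea behind GLO, the snippet below gives every training image its own learnable latent code and optimizes the codes jointly with a generator under a simple reconstruction loss, with no discriminator at all. The architecture, loss, image size, and projection step are illustrative assumptions, not the authors' exact setup.

# Minimal sketch of the Generative Latent Optimization idea (not the paper's
# exact configuration): one learnable latent code per training image,
# optimized jointly with the generator, no adversarial game.

import torch
import torch.nn as nn

n_images, z_dim = 1000, 64
generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                          nn.Linear(256, 32 * 32 * 3), nn.Tanh())
latents = nn.Parameter(torch.randn(n_images, z_dim))      # one code per image
opt = torch.optim.Adam(list(generator.parameters()) + [latents], lr=1e-3)

def train_step(idx, images):                  # idx: indices of images in batch
    recon = generator(latents[idx]).view(len(idx), 3, 32, 32)
    loss = (recon - images).abs().mean()      # stand-in for a reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                     # keep codes inside the unit ball
        norms = latents[idx].norm(dim=1, keepdim=True).clamp(min=1.0)
        latents[idx] /= norms
    return loss.item()

# Latent arithmetic such as "A - B + C" is then plain vector math on these
# codes before feeding the result back through the generator.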
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizhou naifahir."}, {"start": 4.32, "end": 8.84, "text": " In many previous episodes, we talked about generative adversarial networks,"}, {"start": 8.84, "end": 15.68, "text": " a recent new line in machine learning research with some absolutely fantastic results in a variety of areas."}, {"start": 15.68, "end": 21.0, "text": " They can synthesize new images of animals, create 3D models from photos,"}, {"start": 21.0, "end": 24.6, "text": " or dream up new products based on our edits of an image."}, {"start": 24.6, "end": 31.12, "text": " A generative adversarial network means that we have two neural networks battling each other in an arms race."}, {"start": 31.12, "end": 35.04, "text": " The generator network tries to create more and more realistic images,"}, {"start": 35.04, "end": 43.400000000000006, "text": " and these are passed to the discriminator network, which tries to learn the difference between real photographs and fake forged images."}, {"start": 43.400000000000006, "end": 50.36, "text": " During this process, the two neural networks learn and improve together until they become experts at their own craft."}, {"start": 50.36, "end": 53.400000000000006, "text": " And as you can see, the results are fantastic."}, {"start": 53.4, "end": 58.519999999999996, "text": " However, training these networks against each other is anything that roses and sunshine."}, {"start": 58.519999999999996, "end": 63.36, "text": " We don't know if the process converges, or if we reach Nashak-Willibrium."}, {"start": 63.36, "end": 69.0, "text": " Nashak-Willibrium is a state where both actors believe they have found an optimal strategy"}, {"start": 69.0, "end": 76.24, "text": " while taking into account the other actors' possible decisions and neither of them have any interest in changing their strategy."}, {"start": 76.24, "end": 81.56, "text": " This is a classical scenario in game theory, where two convicted criminals are pondering"}, {"start": 81.56, "end": 86.28, "text": " whether they should sneak on each other without knowing how the other decided to act."}, {"start": 86.28, "end": 89.04, "text": " If you wish to hear more about the Nashak-Willibrium,"}, {"start": 89.04, "end": 94.12, "text": " I've put a link to Khan Academy's video in the description, make sure to check it out, you'll love it."}, {"start": 94.12, "end": 98.68, "text": " I find it highly exciting that there are parallels in AI and game theory."}, {"start": 98.68, "end": 105.80000000000001, "text": " However, the even cooler thing is that here, we try to build a system where we don't have to deal with such a situation."}, {"start": 105.80000000000001, "end": 109.84, "text": " This is called generative latent optimization, GLO in short,"}, {"start": 109.84, "end": 114.92, "text": " and it is about introducing tricks to do this by only using a generator network."}, {"start": 114.92, "end": 119.92, "text": " If you have ever read up on font design, you know that it is a highly complex field."}, {"start": 119.92, "end": 126.24000000000001, "text": " However, if we'd like to create a new font type, what we are typically interested in is only a few features,"}, {"start": 126.24000000000001, "end": 132.48000000000002, "text": " like how curvy they are, or whether we are dealing with a serif kind of font and simple descriptions like that."}, {"start": 132.48000000000002, "end": 138.16, "text": " The same principle can be applied to human faces, 
animals, and most topics you can imagine."}, {"start": 138.16, "end": 143.04, "text": " This means that there are many complex concepts that contain a ton of information,"}, {"start": 143.04, "end": 147.76, "text": " most of which can be captured by a simple description with only a few features."}, {"start": 147.76, "end": 153.76, "text": " This is done by projecting this high-dimensional data onto a low-dimensional latent space."}, {"start": 153.76, "end": 160.6, "text": " This latent space helps eliminating adversarial optimization, which makes this system much easier to train."}, {"start": 160.6, "end": 167.07999999999998, "text": " And the main selling point is that it still retains the attractive properties of generative adversarial networks."}, {"start": 167.08, "end": 171.48000000000002, "text": " This means that it can synthesize new samples from the Learned data set."}, {"start": 171.48000000000002, "end": 179.32000000000002, "text": " If it had learned the concept of birds, it will be able to synthesize new bird species."}, {"start": 179.32000000000002, "end": 183.32000000000002, "text": " It can perform continuous interpolation between data points."}, {"start": 183.32000000000002, "end": 191.24, "text": " This means that, for instance, we can produce intermediate states between two chosen furniture types or light fixtures."}, {"start": 191.24, "end": 197.28, "text": " It is also able to perform simple arithmetic operations between any number of data points."}, {"start": 197.28, "end": 205.24, "text": " For instance, if A is males with sunglasses, B are males without sunglasses, and C are females,"}, {"start": 205.24, "end": 210.52, "text": " then A minus B plus C is going to generate females in sunglasses."}, {"start": 210.52, "end": 213.96, "text": " It can also do super-resolution and much, much more."}, {"start": 213.96, "end": 216.88, "text": " Make sure to have a look at the paper in the video description."}, {"start": 216.88, "end": 220.32000000000002, "text": " Now, before we go, we shall address the elephant in the room."}, {"start": 220.32, "end": 222.16, "text": " These images are tiny."}, {"start": 222.16, "end": 231.76, "text": " Our season follow scholars know that for generative adversarial networks, there are plenty of works on how to synthesize high-resolution images with more details."}, {"start": 231.76, "end": 236.51999999999998, "text": " This means that this is a piece of work that opens up exciting new horizons,"}, {"start": 236.51999999999998, "end": 242.68, "text": " but it is not to be measured against the tenth follow-up work on top of a more established line of research."}, {"start": 242.68, "end": 250.64000000000001, "text": " Two-minute papers will be here for you to keep you updated on the progress, which is, as we know, staggeringly quick in machine learning research."}, {"start": 250.64000000000001, "end": 254.76000000000002, "text": " Don't forget to subscribe and click the bell icon to never miss an episode."}, {"start": 254.76, "end": 276.84, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=14zkfDTN_qo
DeepMind's AI Learns Locomotion From Scratch | Two Minute Papers #190
The paper "Emergence of Locomotion Behaviours in Rich Environments" is available here: https://arxiv.org/abs/1707.02286 Our Patreon page with the details is available here: https://www.patreon.com/TwoMinutePapers Recommended for you: Digital Creatures Learn To Walk - https://www.youtube.com/watch?v=kQ2bqz3HPJE Real-Time Character Control With Phase-Functioned Neural Networks - https://www.youtube.com/watch?v=wlndIQHtiFw We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1834465/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We have talked about some awesome previous works where we use learning algorithms to teach digital creatures to navigate in complex environments. The input is a terrain and a set of joints, feet and movement types, and the output has to be a series of motions that maximizes some kind of reward. This previous technique borrowed smaller snippets of movements from a previously existing database of motions and learned to stitch them together in a way that looks natural. And as you can see, those results are phenomenal. A selling point of this new one, which you might say looks less elaborate, is that it synthesizes these motions from scratch. The problem is typically solved via reinforcement learning, which is a technique that comes up with a series of decisions to maximize a prescribed score. This score typically needs to be something reasonably complex. Otherwise, the algorithm is given too much freedom in how to maximize it. For instance, we may want to teach a digital character to run or jump hurdles, but it may start crawling instead, which is still completely fine if our objective is too simple, for instance, just maximizing the distance from the starting point. To alleviate this, we typically resort to reward engineering, which means that we add additional terms to this reward function to regularize the behavior of these creatures. For instance, we can specify that throughout these motions, the body has to remain upright, which likely favors locomotion-type solutions. However, one of the main advantages of machine learning is that we can reuse our solutions for a large set of problems. If we have to specialize our algorithm for all terrain and motion types and different kinds of games, we lose out on the biggest advantage of learning techniques. So researchers at DeepMind decided that they are going to solve this problem with a reward function which is nothing else but forward progress. That's it. The further we get, the higher the score we obtain. This is amazing because it doesn't require any specialized reward function, but at the same time, there are a ton of different solutions that get us far in these terrains. And as you see here, beyond bipeds, a bunch of different agent types are supported. The key to making this happen is to apply two modifications to the original reinforcement learning algorithm. One makes the learning process more robust and less dependent on what parameters we choose, and the other one makes it more scalable, which means that it is able to efficiently deal with larger problems. Furthermore, the training process itself happens on a rich, carefully selected set of challenging levels. Make sure to have a look at the paper for details. A byproduct of this kind of problem formulation is, as you can see, that even though this humanoid does its job with its lower body well, in the meantime, it is flailing its arms like a madman. The reason is likely that there is not much of a difference in the reward between different arm motions. This means that we most likely get through a maze or a height field even when flailing. Therefore, the algorithm doesn't have any reason to favor more natural looking movements for the upper body. It will probably choose a random one, which is highly unlikely to be a natural motion. This creates high quality, albeit amusing results that I am sure some residents of the internet will honor with a sped up remix video with some Benny Hill music.
In summary, no pre-computed motion database, no handcrafting of rewards, and no additional wizardry needed. Everything is learned from scratch with a few small modifications to the reinforcement learning algorithm. Highly remarkable work. If you've enjoyed this episode and would like to help us and support the series, have a look at our Patreon page. Details and cool perks are available in the video description, or just click the letter P at the end of this video. Thanks for watching and for your generous support, and I'll see you next time.
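To make the reward discussion above a little more concrete, here is a minimal Python sketch contrasting a bare forward-progress reward with a typical hand-engineered one. Everything in it (the toy positions, the upright term, the penalty weights) is made up for illustration; this is the shape of the idea, not DeepMind's actual reward code.

```python
import numpy as np

def forward_progress_reward(prev_x, curr_x):
    # The entire reward in the "no reward engineering" setup:
    # how far the agent moved forward during this step.
    return curr_x - prev_x

def engineered_reward(prev_x, curr_x, torso_upright, action):
    # A typical hand-crafted alternative: forward progress plus extra terms
    # that keep the torso upright and penalize wasteful, jerky actions.
    progress = curr_x - prev_x
    upright_bonus = 0.1 * torso_upright            # 1.0 when fully upright
    energy_penalty = 0.001 * float(np.sum(np.square(action)))
    return progress + upright_bonus - energy_penalty

# A toy rollout with invented numbers, just to show how the two scores differ.
positions = [0.0, 0.3, 0.7, 1.2]
actions = [np.array([0.5, -0.2]), np.array([0.9, 0.1]), np.array([0.4, 0.0])]
for prev_x, curr_x, action in zip(positions, positions[1:], actions):
    simple = forward_progress_reward(prev_x, curr_x)
    shaped = engineered_reward(prev_x, curr_x, torso_upright=0.8, action=action)
    print(f"forward progress only: {simple:.3f}   engineered: {shaped:.3f}")
```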
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.48, "end": 9.48, "text": " We have talked about some awesome previous works where we use learning algorithms to teach"}, {"start": 9.48, "end": 13.6, "text": " digital creatures to navigate in complex environments."}, {"start": 13.6, "end": 19.04, "text": " The input is a terrain and a set of joints, feet and movement types, and the output has to"}, {"start": 19.04, "end": 23.2, "text": " be a series of motions that maximizes some kind of reward."}, {"start": 23.2, "end": 27.92, "text": " This previous technique borrowed smaller snippets of movements, from a previously existing"}, {"start": 27.92, "end": 33.56, "text": " database of motions and learned to stitch them together in a way that looks natural."}, {"start": 33.56, "end": 36.64, "text": " And as you can see, these results are phenomenal."}, {"start": 36.64, "end": 41.040000000000006, "text": " And a selling point of this new one, which you might say looks less elaborate."}, {"start": 41.040000000000006, "end": 44.120000000000005, "text": " However, it synthesizes them from scratch."}, {"start": 44.120000000000005, "end": 48.96, "text": " The problem is typically solved via reinforcement learning, which is a technique that comes up"}, {"start": 48.96, "end": 52.84, "text": " with a series of decisions to maximize a prescribed score."}, {"start": 52.84, "end": 56.24, "text": " This score typically needs to be something reasonably complex."}, {"start": 56.24, "end": 60.28, "text": " Otherwise, the algorithm is given too much freedom to maximize it."}, {"start": 60.28, "end": 65.52, "text": " For instance, we may want to teach a digital character to run or jump hurdles, but it may"}, {"start": 65.52, "end": 71.24000000000001, "text": " start crawling instead, which is still completely fine if our objective is too simple."}, {"start": 71.24000000000001, "end": 75.0, "text": " For instance, just maximizing the distance from the starting point."}, {"start": 75.0, "end": 80.68, "text": " To alleviate this, we typically resort to reward engineering, which means that we add additional"}, {"start": 80.68, "end": 85.6, "text": " terms to this reward function to regularize the behavior of these creatures."}, {"start": 85.6, "end": 90.75999999999999, "text": " For instance, we can specify that throughout these motions, the body has to remain upright,"}, {"start": 90.75999999999999, "end": 93.72, "text": " which likely favors locomotion type solutions."}, {"start": 93.72, "end": 98.83999999999999, "text": " However, one of the main advantages of machine learning is that we can reuse our solutions"}, {"start": 98.83999999999999, "end": 100.96, "text": " for a large set of problems."}, {"start": 100.96, "end": 105.24, "text": " If we have to specialize our algorithm for all terrain and motion types and different"}, {"start": 105.24, "end": 109.96, "text": " kinds of games, we lose out on the biggest advantage of learning techniques."}, {"start": 109.96, "end": 114.91999999999999, "text": " So researchers at DeepMind decided that they are going to solve this problem with a reward"}, {"start": 114.92, "end": 118.56, "text": " function, which is nothing else but forward progress."}, {"start": 118.56, "end": 119.56, "text": " That's it."}, {"start": 119.56, "end": 122.28, "text": " The further we get, the higher score we obtain."}, {"start": 122.28, "end": 127.12, "text": " This is amazing because it doesn't 
require any specialized reward function, but at the"}, {"start": 127.12, "end": 132.2, "text": " same time, there are a ton of different solutions that get us far in these terrains."}, {"start": 132.2, "end": 137.12, "text": " And as you see here, beyond bipeds, a bunch of different agent types are supported."}, {"start": 137.12, "end": 142.48000000000002, "text": " The key factors to make this happen is to apply two modifications to the original reinforcement"}, {"start": 142.48000000000002, "end": 143.96, "text": " learning algorithm."}, {"start": 143.96, "end": 149.20000000000002, "text": " One makes the learning process more robust and less dependent on what parameters we choose"}, {"start": 149.20000000000002, "end": 154.36, "text": " and the other one makes it more scalable, which means that it is able to efficiently deal"}, {"start": 154.36, "end": 156.04000000000002, "text": " with larger problems."}, {"start": 156.04000000000002, "end": 161.52, "text": " Furthermore, the training process itself happens on a rich, carefully selected of challenging"}, {"start": 161.52, "end": 162.52, "text": " levels."}, {"start": 162.52, "end": 164.76000000000002, "text": " Make sure to have a look at the paper for details."}, {"start": 164.76000000000002, "end": 170.52, "text": " A byproduct of this kind of problem formulation is, as you can see, that even though this humanoid"}, {"start": 170.52, "end": 176.04000000000002, "text": " does its job with a slower body well, but in the meantime, it is flailing its arms like"}, {"start": 176.04000000000002, "end": 177.04000000000002, "text": " a madman."}, {"start": 177.04000000000002, "end": 181.72, "text": " The reason is likely because there is not much of a difference in the reward between different"}, {"start": 181.72, "end": 182.88, "text": " arm motions."}, {"start": 182.88, "end": 187.76000000000002, "text": " This means that we most likely get through a maze or a height field even when flailing."}, {"start": 187.76000000000002, "end": 192.64000000000001, "text": " Therefore, the algorithm doesn't have any reason to favor more natural looking movements"}, {"start": 192.64000000000001, "end": 194.12, "text": " for the upper body."}, {"start": 194.12, "end": 198.92000000000002, "text": " It will probably choose a random one, which is highly unlikely to be a natural motion."}, {"start": 198.92, "end": 204.83999999999997, "text": " This creates high quality, albeit amusing results that I am sure some residents of the internet"}, {"start": 204.83999999999997, "end": 209.07999999999998, "text": " will honor with a sped up remix video with some Benihil music."}, {"start": 209.07999999999998, "end": 214.76, "text": " In summary, no pre-computed motion database, no handcrafting of rewards, and no additional"}, {"start": 214.76, "end": 216.39999999999998, "text": " wizardry needed."}, {"start": 216.39999999999998, "end": 220.48, "text": " Everything is learned from scratch with a few small modifications to the reinforcement"}, {"start": 220.48, "end": 221.79999999999998, "text": " learning algorithm."}, {"start": 221.79999999999998, "end": 223.48, "text": " Highly remarkable work."}, {"start": 223.48, "end": 227.51999999999998, "text": " If you've enjoyed this episode and would like to help us and support the series, have"}, {"start": 227.52, "end": 229.20000000000002, "text": " a look at our Patreon page."}, {"start": 229.20000000000002, "end": 234.08, "text": " Details and cool perks are available in the video description, or just click the letter"}, 
{"start": 234.08, "end": 236.08, "text": " P at the end of this video."}, {"start": 236.08, "end": 257.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=TItYXBoJ1sc
What is The Best Way To Simulate Liquids? | Two Minute Papers #189
The paper "Perceptual Evaluation of Liquid Simulation Methods" is available here: https://ge.in.tum.de/publications/2017-sig-um/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2753740/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As you know from this series, fluid simulation techniques that are able to create high-fidelity video footage are in abundance in computer graphics research. These techniques all have their own trade-offs, and when we evaluate them, we often use terms like first or second-order accuracy, which are mathematical terms. We often have to evaluate these techniques against each other by means of mathematics, because this way we can set up consistent and unbiased comparisons that everyone understands and agrees upon. However, ultimately, in show business, what matters is how the viewers perceive the end result, whether they think it looks fake, or if it keeps their suspension of disbelief. Not only do we have a choice of simulation techniques, but each of them also has its own set of parameters. For instance, the higher the resolution of our simulations, the more high-frequency details appear in the footage. However, after a point, increasing the resolution further is extremely costly, and while we know what is to be gained in terms of mathematics, it is still unknown how well it would do with the users. So the ultimate question is this: what do I get for my money and time? This paper provides an exhaustive user study to answer this question, where the users are asked to look at two different simulations and, as a binary choice, tell us which is the one they perceived to be closer to the reference. The reference footage is a real-world video of water sloshing in a tank, and the other footage that is to be judged is created via a fluid simulation algorithm. Turns out that the reference footage can be almost anything, as long as there is some splashing and sloshing going on in it. It also turns out that after a relatively favorable breaking point, which is denoted by 2x, further increasing the resolution does not make a significant difference in the users' scores. But boy, does it change the computation times. So this is why such studies are super useful, and it's great to see that the accuracy of these techniques is measured both mathematically and by how convincing they actually look to users. Another curious finding is that if we deny access to the reference footage, we see a large change in the responses and a similar jump in ambiguity. This means that we are reasonably bad at predicting the fine details. Therefore, if the simulation pushes the right buttons, the users will easily believe it to be correct even if it is far away from the ground truth solution. Here's a matrix with a ton of rendered footage. Horizontally you see the same thing with different simulation techniques, and vertically we slowly go from transparent above to opaque below. To keep things fair and really reveal which choices are the best bang for the buck, there are also comparisons between techniques that have a similar computation time. In these cases the fluid implicit particle method, FLIP in short, and the affine particle-in-cell method are almost unanimously favored. These are advanced techniques that combine particle and grid-based simulations. I think this is highly useful information for more time-critical applications, so make sure to have a look at the paper for details. There are similar user studies with glossy and translucent material models and much more in the paper. The source code of this project is also available. Thanks for watching and for your generous support and I'll see you next time.
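For readers who want to picture how such a binary-choice study is tallied, here is a tiny sketch that counts, for each pair of methods, how often users picked one as closer to the reference. The vote data and the method labels are invented placeholders; the paper's statistical analysis is considerably more careful than this.

```python
from collections import Counter

# Each entry: (method A, method B, the one the user judged closer to the reference).
votes = [
    ("FLIP_2x", "grid_1x", "FLIP_2x"),
    ("FLIP_2x", "grid_1x", "FLIP_2x"),
    ("FLIP_2x", "grid_1x", "grid_1x"),
    ("APIC_2x", "grid_4x", "APIC_2x"),
    ("APIC_2x", "grid_4x", "APIC_2x"),
]

wins = Counter()     # how often a method won within a given pairing
totals = Counter()   # how many votes that pairing received in total
for a, b, picked in votes:
    pair = tuple(sorted((a, b)))
    totals[pair] += 1
    wins[(pair, picked)] += 1

for (a, b), total in totals.items():
    print(f"{a} vs {b}: {wins[((a, b), a)]}/{total} users preferred {a}")
```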
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejone Fahir."}, {"start": 4.64, "end": 9.68, "text": " As you know from this series, fluid simulation techniques that are able to create high-fidelity"}, {"start": 9.68, "end": 13.52, "text": " video footage are in abundance in computer graphics research."}, {"start": 13.52, "end": 18.56, "text": " These techniques all have their own trade-offs, and when we evaluate them, we often use terms"}, {"start": 18.56, "end": 23.12, "text": " like first or second-order accuracy, which are mathematical terms."}, {"start": 23.12, "end": 27.68, "text": " We often have to evaluate these techniques against each other by means of mathematics, because"}, {"start": 27.68, "end": 33.2, "text": " this way we can set up consistent and unbiased comparisons that everyone understands and"}, {"start": 33.2, "end": 34.2, "text": " agrees upon."}, {"start": 34.2, "end": 39.44, "text": " However, ultimately, in the show business, what matters is how the viewers perceive the"}, {"start": 39.44, "end": 45.2, "text": " end result, whether they think it looks fake, or if it keeps their suspension of this belief."}, {"start": 45.2, "end": 50.32, "text": " We have the choice of not only simulation techniques, but all of them also have their own"}, {"start": 50.32, "end": 51.68, "text": " set of parameters."}, {"start": 51.68, "end": 56.64, "text": " For instance, the higher the resolution of our simulations, the more high-frequency details"}, {"start": 56.64, "end": 58.24, "text": " appear in the footage."}, {"start": 58.24, "end": 63.84, "text": " However, after a point, increasing the resolution further is extremely costly, and while we"}, {"start": 63.84, "end": 69.12, "text": " know what is to be gained in terms of mathematics, it is still unknown how well it would do with"}, {"start": 69.12, "end": 70.12, "text": " the users."}, {"start": 70.12, "end": 74.96000000000001, "text": " So the ultimate question is this, what do I get for my money and time?"}, {"start": 74.96000000000001, "end": 79.72, "text": " This paper provides an exhaustive user study to answer this question, where the users are"}, {"start": 79.72, "end": 84.68, "text": " asked to look at two different simulations and as a binary choice, tell us which is the"}, {"start": 84.68, "end": 87.80000000000001, "text": " one they perceived to be closer to the reference."}, {"start": 87.80000000000001, "end": 92.68, "text": " The reference footage is a real-world video of a water sloshing in a tank and the other"}, {"start": 92.68, "end": 97.72000000000001, "text": " footage that is to be judged is created via a fluid simulation algorithm."}, {"start": 97.72000000000001, "end": 102.32000000000001, "text": " Turns out that the reference footage can be almost anything, as long as there are some"}, {"start": 102.32000000000001, "end": 104.92000000000002, "text": " splashing and sloshing going on in it."}, {"start": 104.92000000000002, "end": 111.28, "text": " It also turns out that after a relatively favorable breaking point, which is denoted by 2x, further"}, {"start": 111.28, "end": 116.16, "text": " increasing the resolution does not make a significant difference in the user's course."}, {"start": 116.16, "end": 119.08, "text": " But boy, does it change the computation times?"}, {"start": 119.08, "end": 123.92, "text": " So this is why such studies are super useful, and it's great to see that the accuracy"}, {"start": 123.92, "end": 129.48, "text": " of these 
techniques are measured both mathematically and also how convincing they actually look"}, {"start": 129.48, "end": 130.84, "text": " for users."}, {"start": 130.84, "end": 135.68, "text": " Another curious finding is that if we deny access to the reference footage, we see a large"}, {"start": 135.68, "end": 140.0, "text": " change in different responses and a similar jump in ambiguity."}, {"start": 140.0, "end": 144.28, "text": " This means that we are reasonably bad at predicting the fine details."}, {"start": 144.28, "end": 148.92, "text": " Therefore if the simulation pushes the right buttons, the users will easily believe"}, {"start": 148.92, "end": 153.4, "text": " it to be correct even if it is far away from the ground truth solution."}, {"start": 153.4, "end": 156.28, "text": " Here's a matrix with a ton of rendered footage."}, {"start": 156.28, "end": 161.36, "text": " Horizontally you see the same thing with different simulation techniques and vertically"}, {"start": 161.36, "end": 165.2, "text": " we slowly go from transparent above to opaque below."}, {"start": 165.2, "end": 170.23999999999998, "text": " To keep things fair and really reveal which choices are the best bank for the buck, there"}, {"start": 170.23999999999998, "end": 175.07999999999998, "text": " are also comparisons between techniques that have a similar computation time."}, {"start": 175.07999999999998, "end": 181.04, "text": " In these cases the Floyd implicit particle, flip in short and the affine particle in cell"}, {"start": 181.04, "end": 183.6, "text": " are almost unanimously favorable."}, {"start": 183.6, "end": 187.95999999999998, "text": " These are advanced techniques that combine particle and grid-based simulations."}, {"start": 187.95999999999998, "end": 192.76, "text": " I think this is highly useful information for more time critical applications, so make"}, {"start": 192.76, "end": 195.32, "text": " sure to have a look at the paper for details."}, {"start": 195.32, "end": 200.07999999999998, "text": " There are similar user studies with glossy and translucent material models and much more"}, {"start": 200.07999999999998, "end": 201.07999999999998, "text": " in the paper."}, {"start": 201.07999999999998, "end": 203.51999999999998, "text": " The source code of this project is also available."}, {"start": 203.52, "end": 223.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Mu0ew2F-SSA
AI Learns To Improve Smoke Simulations | Two Minute Papers #188
The paper "Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors" is available here: https://ge.in.tum.de/publications/2017-sig-chu/ Recommended for you: Wavelet Turbulence - https://www.youtube.com/watch?v=5xLSbj5SsSE Neural Network Learns The Physics of Fluids and Smoke - https://www.youtube.com/watch?v=iOWamCtnwTc We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2571245/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about using AI to create super detailed smoke simulations. Typically, creating a crude simulation doesn't take very long, but as we increase the resolution, the execution time and memory consumption skyrocket. In the age of AI, it only sounds logical to try to include some learning algorithms in this process. So what if we had an AI-based technique that would have some sort of understanding of smoke simulations, take our crude data and add the fine details to it? This way, we could obtain a high-resolution smoke simulation without waiting several days or weeks for the computation. Now, if you're a seasoned Fellow Scholar, you may remember an earlier work by the name Wavelet Turbulence, which is one of my favorite papers of all time. So much so that it got the distinction of being showcased in the very first Two Minute Papers episode. I was a sophomore college student back then when I first saw it and was absolutely shocked by the quality of the results. That was an experience I'll never forget. It also won a technical Oscar award, and it is not an overstatement to say that this was one of the most influential works that made me realize that research is my true calling. The link to the first episode is available in the video description, and if you want to see how embarrassing it is, make sure to check it out. It did something similar, but instead of using AI, it used some heuristics that describe the ratio and distribution of smaller and bigger vortices in a piece of fluid or smoke. Using this information, it could create a somewhat similar effect, but ultimately that technique had an understanding of smoke simulations only in general, and it didn't know anything about the scene that we have at hand right now. Another work that is related to this is showing a bunch of smoke simulation videos to an AI and teaching it how to continue these simulations by itself. I would place this work as a middle-ground solution, because it says that we should take a step back and not try to synthesize everything from scratch. Let's create a database of simulations, dice them up into tiny, tiny patches, look at the same footage in low and high resolutions, and learn how they relate to each other. This way we can hand the neural network some low-resolution footage, and it will be able to make an educated guess as to which high-resolution patch should be the best match for it. When we find the right patch, we just switch the coarse simulation to the most fitting high-resolution patch in the database. You might say that, in theory, creating such a Frankenstein smoke simulation sounds like a dreadful idea. But have a look at the results, as they are absolutely brilliant. And as you can see, it takes a really crude base simulation and adds so many details to it, it's truly an incredible achievement. One neural network is trained to capture similarities in densities and one for vorticity. Using the two neural networks in tandem, we can take a low-resolution fluid flow and synthesize the fine details on top of it in a way that is hardly believable. It also handles boundary conditions, which means that these details are correctly added even if our smoke puff hits an object. This was an issue with wavelet turbulence, which had to be addressed with several follow-up works. There are also comparisons against this legendary algorithm and, as you can see, the new technique smokes it. However, it took 9 years to do this.
This is exactly 9 eternities in the world of research, which is a huge testament to how powerful the original algorithm was. It is also really cool to get more and more messages where I get to know more about you Fellow Scholars. I was informed that the series is used in school classes in Brazil. It is also used to augment college education, and it is a great topic for fun family conversations over dinner. That's just absolutely fantastic. Loving the fact that the series is an inspiration for many of you. Thanks for watching and for your generous support, and I'll see you next time.
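Here is a toy sketch of the patch-lookup idea from the transcript above: pair up low- and high-resolution patches in a database, describe each low-resolution patch with a feature vector, and for a new coarse patch paste in the high-resolution partner of its nearest neighbor. The descriptor below is a plain flattening and the patches are random noise, so this mirrors only the data structure of the method, not the paper's trained CNN descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny database of paired (low-res, high-res) density patches. In the real
# method the pairs are diced out of actual smoke simulations.
num_patches, lo, hi = 100, 8, 32
lowres_db = rng.random((num_patches, lo, lo))
highres_db = rng.random((num_patches, hi, hi))

def descriptor(patch):
    # Stand-in for the learned CNN feature descriptor: just flatten the patch.
    return patch.reshape(-1)

db_descriptors = np.stack([descriptor(p) for p in lowres_db])

def upres_patch(coarse_patch):
    # Find the database entry whose descriptor is closest to the query patch...
    query = descriptor(coarse_patch)
    idx = int(np.argmin(np.linalg.norm(db_descriptors - query, axis=1)))
    # ...and return its paired high-resolution patch as the synthesized detail.
    return highres_db[idx]

coarse = rng.random((lo, lo))
print(upres_patch(coarse).shape)  # (32, 32)
```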
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5600000000000005, "end": 9.56, "text": " This work is about using AI to create super detailed smoke simulations."}, {"start": 9.56, "end": 15.040000000000001, "text": " Typically, creating a crude simulation doesn't take very long, but as we increase the resolution,"}, {"start": 15.040000000000001, "end": 18.56, "text": " the execution time and memory consumption skyrockets."}, {"start": 18.56, "end": 23.76, "text": " In the age of AI, it only sounds logical to try to include some learning algorithms in"}, {"start": 23.76, "end": 24.92, "text": " this process."}, {"start": 24.92, "end": 30.080000000000002, "text": " So what if we had an AI-based technique that would have some sort of understanding of smoke"}, {"start": 30.080000000000002, "end": 34.84, "text": " simulations, take our crude data and add the fine details to it."}, {"start": 34.84, "end": 39.32, "text": " This way, we could obtain a high resolution smoke simulation without waiting several"}, {"start": 39.32, "end": 41.96, "text": " days or weeks for the computation."}, {"start": 41.96, "end": 46.64, "text": " Now if you're a seasoned Fellow scholar, you may remember an earlier work by the name"}, {"start": 46.64, "end": 51.44, "text": " Wavelet Turbulence, which is one of my favorite papers of all time."}, {"start": 51.44, "end": 56.199999999999996, "text": " So much so that it got the distinction of being showcased in the very first two-minute"}, {"start": 56.199999999999996, "end": 57.48, "text": " papers episode."}, {"start": 57.48, "end": 62.839999999999996, "text": " I was a sophomore college student back then when I first seen it and was absolutely shocked"}, {"start": 62.839999999999996, "end": 64.75999999999999, "text": " by the quality of the results."}, {"start": 64.75999999999999, "end": 67.24, "text": " That was an experience I'll never forget."}, {"start": 67.24, "end": 72.0, "text": " It also won a technical Oscar award and it is not an overstatement to say that this was"}, {"start": 72.0, "end": 77.52, "text": " one of the most influential works that made me realize that research is my true calling."}, {"start": 77.52, "end": 81.6, "text": " The link to the first episode is available in the video description and if you want to"}, {"start": 81.6, "end": 84.92, "text": " see how embarrassing it is, make sure to check it out."}, {"start": 84.92, "end": 90.28, "text": " It did something similar, but instead of using AI, it used some heuristics that describe"}, {"start": 90.28, "end": 96.92, "text": " what is the ratio and distribution of smaller and bigger vertices in a piece of fluid or smoke."}, {"start": 96.92, "end": 101.47999999999999, "text": " Using this information, it could create a somewhat similar effect, but ultimately that"}, {"start": 101.47999999999999, "end": 105.12, "text": " technique had an understanding of smoke simulations in general."}, {"start": 105.12, "end": 109.64, "text": " And it didn't know anything about the scene that we have at hand right now."}, {"start": 109.64, "end": 114.0, "text": " Another work that is related to this is showing a bunch of smoke simulation videos to an"}, {"start": 114.0, "end": 118.28, "text": " AI and teach it how to continue these simulations by itself."}, {"start": 118.28, "end": 122.72, "text": " I would place this work as a middle ground solution because this work says that we should"}, {"start": 122.72, "end": 
127.24000000000001, "text": " take a step back and not try to synthesize everything from scratch."}, {"start": 127.24000000000001, "end": 132.6, "text": " Let's create a database of simulations, dice them up into tiny, tiny patches, look"}, {"start": 132.6, "end": 138.68, "text": " at the same footage in low and high resolutions and learn how they relate to each other."}, {"start": 138.68, "end": 143.24, "text": " This way we can hand the neural network some low resolution footage and it will be able"}, {"start": 143.24, "end": 148.07999999999998, "text": " to make an educated guess as to which high resolution patch should be the best match"}, {"start": 148.07999999999998, "end": 149.07999999999998, "text": " for it."}, {"start": 149.07999999999998, "end": 153.04, "text": " When we found the right patch, we just switched the core simulation to the most fitting"}, {"start": 153.04, "end": 155.56, "text": " high resolution patch in the database."}, {"start": 155.56, "end": 160.56, "text": " You might say that in theory, creating such a Frankenstein smoke simulation sounds like"}, {"start": 160.56, "end": 161.84, "text": " a dreadful idea."}, {"start": 161.84, "end": 165.52, "text": " But have a look at the results as they are absolutely brilliant."}, {"start": 165.52, "end": 170.52, "text": " And as you can see, it takes a really crude base simulation and adds so many details to"}, {"start": 170.52, "end": 173.52, "text": " it, it's truly an incredible achievement."}, {"start": 173.52, "end": 179.56, "text": " One neural network is trained to capture similarities in densities and one for vorticity."}, {"start": 179.56, "end": 184.56, "text": " Using the two neural networks in tandem, we can take a low resolution fluid flow and synthesize"}, {"start": 184.56, "end": 188.68, "text": " the fine details on top of it in a way that is hardly believable."}, {"start": 188.68, "end": 193.56, "text": " It also handles boundary conditions, which means that these details are correctly added"}, {"start": 193.56, "end": 196.52, "text": " even if our smoke puff hits an object."}, {"start": 196.52, "end": 200.92000000000002, "text": " This was an issue with wavelet turbulence which had to be addressed with several follow-up"}, {"start": 200.92000000000002, "end": 201.92000000000002, "text": " works."}, {"start": 201.92000000000002, "end": 206.96, "text": " There are also comparisons against this legendary algorithm and as you can see, the new techniques"}, {"start": 206.96, "end": 207.96, "text": " smokes it."}, {"start": 207.96, "end": 210.72, "text": " However, it took 9 years to do this."}, {"start": 210.72, "end": 215.56, "text": " This is exactly 9 eternities in the world of research, which is a huge testament to how"}, {"start": 215.56, "end": 218.24, "text": " powerful the original algorithm was."}, {"start": 218.24, "end": 222.32000000000002, "text": " It is also really cool to get more and more messages where I get to know more about"}, {"start": 222.32000000000002, "end": 223.60000000000002, "text": " you fellow scholars."}, {"start": 223.60000000000002, "end": 227.96, "text": " I was informed that the series is used in school classes in Brazil."}, {"start": 227.96, "end": 233.56, "text": " It is also used to augment college education and it is a great topic for fun family conversations"}, {"start": 233.56, "end": 234.56, "text": " over dinner."}, {"start": 234.56, "end": 236.88, "text": " That's just absolutely fantastic."}, {"start": 236.88, "end": 240.52, "text": " Loving the fact that the 
series is an inspiration for many of you."}, {"start": 240.52, "end": 251.16000000000003, "text": " Thanks for watching and for your generous support."}, {"start": 251.16, "end": 279.64, "text": " See you next Thursday atr\u00edol slowly intro."}]
Two Minute Papers
https://www.youtube.com/watch?v=bVGubOt_jLI
Physics-based Image and Video Editing | Two Minute Papers #187
The paper "Calipso: Physics-based Image and Video Editing through CAD Model Proxies" is available here: https://arxiv.org/abs/1708.03748 Project page: http://mimesis.inria.fr/calipso/ Physics simulation by SOFA: http://www.sofa-framework.org Recommended for you: https://www.youtube.com/watch?v=BjwhMDhbqAs We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2714673/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about changing already existing images and videos by adding new objects to them or editing their physical attributes. These editable attributes include gravity, mass, stiffness, or we can even add new physical forces to the scene. For instance, we can change the stiffness of the watches in this Dalí painting and create an animation from it. Physically accurate animations from paintings. How cool is that? This is approaching science fiction levels of craziness. Animating a stationary clothesline by adding a virtual wind effect to the scene or bending a bridge by changing its mass is also a possibility. The first reaction I had when I looked at this work was, are you kidding me? You can't edit a photograph, especially since I've seen plenty of earlier works that try to do something similar, but each time the limitations were just too crippling for real-world usage. And the ultimate question is always, how much user interaction does this need? Is this trivial to use, or is it a laborious process? What we need to do is roughly highlight the outline of the object that we'd like to manipulate. The algorithm uses a previously published technique to make sure that the outlines are accurately captured and then tries to create a 3D digital model from the selected area. We need one more step where we align the 3D model to the image or video input. Finally, the attribute changes and edits take place not on the video footage but on this 3D model, through a physics simulation technique. A truly refreshing combination of old and new techniques with some killer applications, loving it. The biggest challenge is to make sure that the geometry and the visual consistency of the scene are preserved through these changes. There are plenty of details discussed in the paper, make sure to have a look at it; the link is available in the video description. As these 2D photo to 3D model generator algorithms improve, so will the quality of these editing techniques in the near future. Our previous episode was on this topic, make sure to have a look at that. Also, if you would like to get more updates on the newest and coolest works in this rapidly improving field, make sure to subscribe and click the bell icon to be notified when new Two Minute Papers videos come up. Thanks for watching and for your generous support and I'll see you next time.
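To get a feeling for what editing a physical attribute actually changes, here is a tiny, self-contained mass-spring toy: the same scene simulated twice with different stiffness values produces visibly different motion. This is of course not the paper's pipeline, which fits a 3D proxy to the image and runs a full physics engine such as SOFA; it only shows the kind of knob such an edit turns.

```python
import numpy as np

def simulate_hanging_weight(stiffness, mass=1.0, steps=400, dt=0.01):
    """Damped spring under gravity; returns the vertical displacement over time."""
    g, damping = -9.81, 0.5
    x, v = 0.0, 0.0  # displacement from the rest position, and velocity
    trajectory = []
    for _ in range(steps):
        force = -stiffness * x - damping * v + mass * g
        v += dt * force / mass
        x += dt * v
        trajectory.append(x)
    return np.array(trajectory)

# The same "scene" with an edited stiffness attribute: the softer object
# sags much further and swings more slowly than the stiff one.
soft = simulate_hanging_weight(stiffness=5.0)
stiff = simulate_hanging_weight(stiffness=50.0)
print(f"soft object sags to  {soft.min():.2f} m")
print(f"stiff object sags to {stiff.min():.2f} m")
```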
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efeir."}, {"start": 4.28, "end": 8.2, "text": " This work is about changing already existing images and videos"}, {"start": 8.2, "end": 12.200000000000001, "text": " by adding new objects to them or editing their physical attributes."}, {"start": 12.200000000000001, "end": 16.92, "text": " These editable attributes include gravity, mass, stiffness,"}, {"start": 16.92, "end": 19.96, "text": " or we can even add new physical forces to the scene."}, {"start": 19.96, "end": 24.2, "text": " For instance, we can change the stiffness of the watches in this dali painting"}, {"start": 24.2, "end": 26.16, "text": " and create an animation from it."}, {"start": 26.16, "end": 29.16, "text": " Physically accurate animations from paintings."}, {"start": 29.16, "end": 30.76, "text": " How cool is that?"}, {"start": 30.76, "end": 34.2, "text": " This is approaching science fiction levels of craziness."}, {"start": 34.2, "end": 38.8, "text": " Animating a stationary clothesline by adding a virtual wind effect to the scene"}, {"start": 38.8, "end": 43.08, "text": " or bending a bridge by changing its mass is also a possibility."}, {"start": 43.08, "end": 46.16, "text": " The first reaction I had when I've looked at this work was,"}, {"start": 46.16, "end": 47.519999999999996, "text": " are you kidding me?"}, {"start": 47.519999999999996, "end": 49.32, "text": " You can't edit a photograph,"}, {"start": 49.32, "end": 52.08, "text": " especially that I've seen plenty of earlier works"}, {"start": 52.08, "end": 54.0, "text": " that try to do something similar,"}, {"start": 54.0, "end": 58.56, "text": " but each time the limitations were just too crippling for real-world usage."}, {"start": 58.56, "end": 60.480000000000004, "text": " And the ultimate question is always,"}, {"start": 60.480000000000004, "end": 62.88, "text": " how much user interaction does this need?"}, {"start": 62.88, "end": 66.60000000000001, "text": " Is this trivial to use or is it a laborious process?"}, {"start": 66.60000000000001, "end": 70.2, "text": " What we need to do is roughly highlight the outline of the object"}, {"start": 70.2, "end": 71.60000000000001, "text": " that we'd like to manipulate."}, {"start": 71.60000000000001, "end": 74.52000000000001, "text": " The algorithm uses a previously published technique"}, {"start": 74.52000000000001, "end": 77.6, "text": " to make sure that the outlines are accurately captured"}, {"start": 77.6, "end": 81.84, "text": " and then tries to create a 3D digital model from the selected area."}, {"start": 81.84, "end": 87.32000000000001, "text": " We need one more step where we align the 3D model to the image or video input."}, {"start": 87.32, "end": 91.88, "text": " Finally, the attribute changes and edits take place not on the video footage"}, {"start": 91.88, "end": 95.24, "text": " but on this 3D model through a physics simulation technique."}, {"start": 95.24, "end": 98.75999999999999, "text": " A truly refreshing combination of old and new techniques"}, {"start": 98.75999999999999, "end": 101.6, "text": " with some killer applications, loving it."}, {"start": 101.6, "end": 104.44, "text": " The biggest challenge is to make sure that the geometry"}, {"start": 104.44, "end": 108.56, "text": " and the visual consistency of the scene is preserved through these changes."}, {"start": 108.56, "end": 111.0, "text": " There are plenty of details discussed in the paper,"}, {"start": 111.0, 
"end": 112.52, "text": " make sure to have a look at that,"}, {"start": 112.52, "end": 115.19999999999999, "text": " the link to it is available in the video description."}, {"start": 115.2, "end": 119.60000000000001, "text": " As these 2D photo to 3D model generator algorithms improve,"}, {"start": 119.60000000000001, "end": 123.28, "text": " so will the quality of these editing techniques in the near future."}, {"start": 123.28, "end": 126.96000000000001, "text": " Our previous episode was on this topic, make sure to have a look at that."}, {"start": 126.96000000000001, "end": 130.96, "text": " Also, if you would like to get more updates on the newest and coolest works"}, {"start": 130.96, "end": 134.08, "text": " in this rapidly improving field, make sure to subscribe"}, {"start": 134.08, "end": 138.48000000000002, "text": " and click the bell icon to be notified when new 2 minute papers videos come up."}, {"start": 138.48, "end": 148.48, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=BjwhMDhbqAs
AI Creates 3D Models From Images | Two Minute Papers #186
The paper "Hierarchical Surface Prediction for 3D Object Reconstruction" is available here: https://arxiv.org/abs/1704.00710 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Brian Gilman, Dave Rushton-Smith, Dennis Abts, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2717506/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about a task that humans are remarkably good at, but at which learning algorithms mostly flounder, and that is creating 3D geometry by looking at a 2D color image. In video games and animation films, this is a scenario that comes up very often. If we need a new weapon model in the game, we typically give the artist a photo, and they will sit down with a 3D modeling program and spend a few hours sculpting a similar 3D geometry. And I will quickly note that our binocular vision is not entirely necessary to make this happen. We can look at 2D images all day long and still have a good idea about the shape of an airplane, even with one eye closed. We had previous episodes on this problem, and the verdict was that the results with previous techniques are great, but not very detailed. Some mathematicians like to say that this algorithm has cubic complexity, or cubic scaling, which means that if we wish to increase the resolution of the 3D model just a tiny bit, we have to wait not a tiny bit longer, but significantly longer. And the cubic part means that this trade-off becomes unbearable even for moderately high resolutions. This paper offers a technique to break through this limitation. This new technique still uses a learning algorithm to predict the geometry, but it creates these 3D models hierarchically. This means that it starts out approximating the coarse geometry of the output and restarts the process by adding more and more fine details to it. The geometry becomes more and more refined over several steps. Now, this refinement doesn't just work by itself; we need a carefully designed algorithm around it. The refinement happens by using additional information in each step from the created model. Namely, we imagine our predicted 3D geometry as a collection of small blocks, and each block is classified as either free space, occupied space, or a surface. After this classification has happened, we can focus our efforts on refining the surface of the model, leading to a significant improvement in the execution time of the algorithm. As a result, we get 3D models that are of higher quality than the ones offered by previous techniques. The outputs are still not super high resolution, but they capture a fair amount of surface detail. And you know the drill, research is a process, and every paper is a stepping stone. And this is one of those stepping stones that can potentially save many hours of work for 3D artists in the industry. Thanks for watching and for your generous support, and I'll see you next time.
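A compact sketch of the hierarchical refinement logic described above: classify blocks as free, occupied, or surface, and subdivide only the surface blocks at the next level. The shape below is an analytic sphere rather than a neural network's prediction, so the sketch mirrors the refinement strategy, not the learned model from the paper.

```python
import numpy as np

def classify_block(center, size, sdf):
    # A block is 'surface' if the signed distance at its center is smaller than
    # the block's half-diagonal; otherwise it lies fully outside or inside.
    d = sdf(center)
    half_diagonal = np.sqrt(3) * size / 2
    if abs(d) <= half_diagonal:
        return "surface"
    return "free" if d > 0 else "occupied"

def refine(center, size, sdf, depth, max_depth, surface_blocks):
    label = classify_block(center, size, sdf)
    if label != "surface" or depth == max_depth:
        if label == "surface":
            surface_blocks.append((center, size))
        return
    # Only surface blocks are subdivided into 8 children; free and occupied
    # space is never refined, which is where the savings come from.
    child = size / 2
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                offset = np.array([dx, dy, dz]) * child / 2
                refine(center + offset, child, sdf, depth + 1, max_depth, surface_blocks)

sphere_sdf = lambda p: float(np.linalg.norm(p)) - 0.6  # a toy shape: sphere of radius 0.6
blocks = []
refine(np.zeros(3), 2.0, sphere_sdf, depth=0, max_depth=4, surface_blocks=blocks)
print(f"{len(blocks)} fine surface blocks instead of {8**4} blocks in a dense grid")
```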
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 10.200000000000001, "text": " Today, we are going to talk about a task that humans are remarkably good at, but learning"}, {"start": 10.200000000000001, "end": 17.52, "text": " algorithms mostly flounder, and that is creating 3D geometry by looking at a 2D color image."}, {"start": 17.52, "end": 22.48, "text": " In video games and animation films, this is a scenario that comes up very often."}, {"start": 22.48, "end": 26.84, "text": " If we need a new weapon model in the game, we typically give the artist a photo, who will"}, {"start": 26.84, "end": 33.6, "text": " sit down with a 3D modeler program and spend a few hours sculpting a similar 3D geometry."}, {"start": 33.6, "end": 38.44, "text": " And I will quickly note that our binocular vision is not entirely necessary to make this"}, {"start": 38.44, "end": 39.44, "text": " happen."}, {"start": 39.44, "end": 44.16, "text": " We can look at 2D images all day long and still have a good idea about the shape of an"}, {"start": 44.16, "end": 46.8, "text": " airplane, even with one eye closed."}, {"start": 46.8, "end": 50.84, "text": " We had previous episodes on this problem, and the verdict was that the results with"}, {"start": 50.84, "end": 54.2, "text": " previous techniques are great, but not very detailed."}, {"start": 54.2, "end": 59.56, "text": " Some mathematicians like to say that this algorithm has a cubic complexity, or cubic scaling,"}, {"start": 59.56, "end": 64.44, "text": " which means that if we wish to increase the resolution of the 3D model, just a tiny"}, {"start": 64.44, "end": 65.44, "text": " bit."}, {"start": 65.44, "end": 69.4, "text": " We have to wait not a tiny bit longer, but significantly longer."}, {"start": 69.4, "end": 74.04, "text": " And the cubic part means that this trade-off becomes unbearable, even for moderately"}, {"start": 74.04, "end": 75.52000000000001, "text": " high resolutions."}, {"start": 75.52000000000001, "end": 79.16, "text": " This paper offers a technique to break through this limitation."}, {"start": 79.16, "end": 84.04, "text": " This new technique still uses a learning algorithm to predict the geometry, but it creates"}, {"start": 84.04, "end": 86.68, "text": " these 3D models hierarchically."}, {"start": 86.68, "end": 91.72, "text": " This means that it starts out approximating the coarse geometry of the output and restarts"}, {"start": 91.72, "end": 95.32000000000001, "text": " the process by adding more and more fine details to it."}, {"start": 95.32000000000001, "end": 99.36000000000001, "text": " The geometry becomes more and more refined over several steps."}, {"start": 99.36000000000001, "end": 105.08000000000001, "text": " Now this refinement doesn't just work unless we have a carefully designed algorithm around"}, {"start": 105.08000000000001, "end": 106.08000000000001, "text": " it."}, {"start": 106.08000000000001, "end": 111.36000000000001, "text": " The refinement happens by using additional information in each step from the created model."}, {"start": 111.36, "end": 117.4, "text": " Namely, we imagine are predicted 3D geometry as a collection of small blocks, and each block"}, {"start": 117.4, "end": 123.32, "text": " is classified as either free space, occupied space, or as a surface."}, {"start": 123.32, "end": 128.28, "text": " After this classification happened, we have the possibility to focus our efforts on refining"}, 
{"start": 128.28, "end": 134.44, "text": " the surface of the model, leading to a significant improvement in the execution time of the algorithm."}, {"start": 134.44, "end": 139.76, "text": " As a result, we get 3D models that are of higher quality than the ones offered by previous"}, {"start": 139.76, "end": 140.76, "text": " techniques."}, {"start": 140.76, "end": 145.72, "text": " The outputs are still not super high resolution, but they capture a fair number of surface"}, {"start": 145.72, "end": 146.72, "text": " detail."}, {"start": 146.72, "end": 151.6, "text": " And you know the drill, research is a process, and every paper is a stepping stone."}, {"start": 151.6, "end": 156.04, "text": " And this is one of those stepping stones that can potentially save many hours of work"}, {"start": 156.04, "end": 158.12, "text": " for 3D artists in the industry."}, {"start": 158.12, "end": 178.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZtP3gl_2kBM
AI Creates Facial Animation From Audio | Two Minute Papers #185
The paper "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" is available here: http://research.nvidia.com/publication/2017-07_Audio-Driven-Facial-Animation Our Patreon page and the newest post on empowering research projects: https://www.patreon.com/TwoMinutePapers https://www.patreon.com/posts/14199475 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Dave Rushton-Smith, Dennis Abts, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2308464/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér.
[{"start": 0.0, "end": 1.68, "text": "Dear Fellow Scholars,"}, {"start": 1.68, "end": 4.68, "text": "\u9019\u5169\u4f4d\u662fPapers with Karo and Jona and Fahir"}, {"start": 4.68, "end": 21.68, "text": "\u4f60\u5011\u771f\u8a0e\u53ad,\u5403\u98fd\u4e86\u548c\u8db3\u4e86,\u5c31\u505a\u54ea\u770b\u6211\u8aaa\u8a71"}, {"start": 35.68, "end": 41.68, "text": "\u9019\u4ef6\u4e8b\u662f\u5728\u505a\u4e86\u4e00\u5834\u96fb\u5f71\u7684\u6642\u9593"}, {"start": 41.68, "end": 46.68, "text": "\u9019\u5c31\u8aaa\u4e86\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 46.68, "end": 48.68, "text": "\u6211\u5011\u5728\u505a\u4e86\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 48.68, "end": 51.68, "text": "\u9019\u5c31\u8aaa\u4e86\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 51.68, "end": 54.68, "text": "\u9019\u5c31\u8aaa\u4e86\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 54.68, "end": 57.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 57.68, "end": 59.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 59.68, "end": 63.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71\u7684\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 63.68, "end": 66.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 66.68, "end": 69.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 69.68, "end": 71.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 71.68, "end": 74.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 74.68, "end": 77.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 77.68, "end": 80.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 80.68, "end": 83.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 83.68, "end": 86.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 86.68, "end": 89.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 89.68, "end": 92.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 92.68, "end": 95.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 95.68, "end": 97.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 97.68, "end": 100.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 100.68, "end": 103.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 103.68, "end": 106.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 106.68, "end": 109.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 109.68, "end": 112.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 112.68, "end": 115.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 115.68, "end": 118.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71\u7684\u96fb\u5f71"}, {"start": 118.68, "end": 120.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 120.68, "end": 123.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 123.68, "end": 125.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 125.68, "end": 
127.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 127.68, "end": 129.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 129.68, "end": 131.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 131.68, "end": 133.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 133.68, "end": 135.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 135.68, "end": 137.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 137.68, "end": 139.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 139.68, "end": 141.68, "text": "\u9019\u5c31\u662f\u4e00\u5834\u96fb\u5f71"}, {"start": 141.68, "end": 144.24, "text": "\u7136\u540e\u8bf4\u4ec0\u4e48\u6bcf\u4e2a\u5b57\u90fd\u5728\u8fd9\u91cc"}, {"start": 144.24, "end": 145.20000000000002, "text": "\u7136\u540e"}, {"start": 145.20000000000002, "end": 147.68, "text": "\u90a3\u79cd\u5b57\u5e55\u53ef\u4ee5\u7528\u8fd9\u4e2a\u6280\u672f"}, {"start": 147.68, "end": 149.20000000000002, "text": "\u4e3a\u4e86\u4e00\u4e2a\u786e\u5b9e\u7684\u5f62\u5f0f"}, {"start": 149.20000000000002, "end": 150.64000000000001, "text": "\u8bf4\u4ec0\u4e48"}, {"start": 150.64000000000001, "end": 151.84, "text": "\u6240\u4ee5\u6211\u4eec\u53ef\u4ee5\u4ece"}, {"start": 151.84, "end": 153.44, "text": "\u786e\u5b9e\u7684\u5f62\u5f0f\u4e0a"}, {"start": 153.44, "end": 154.8, "text": "\u7136\u540e\u628a\u5f62\u5f0f\u653e\u5728"}, {"start": 154.8, "end": 156.0, "text": "\u4e00\u4e2a\u786e\u5b9e\u7684\u5f62\u5f0f\u4e0a"}, {"start": 156.0, "end": 156.96, "text": "\u7528\u8fd9\u4e2a\u786e\u5b9e\u7684\u5f62\u5f0f"}, {"start": 162.8, "end": 164.48000000000002, "text": "\u978b\u5b50\u662f\u4e00\u5757\u513f\u7684"}, {"start": 164.48000000000002, "end": 165.68, "text": "\u7528\u94fe\u950b\u7684\u94fe\u950b"}, {"start": 165.68, "end": 166.88, "text": "\u7528\u94fe\u950b\u7684\u94fe\u950b\u7684\u94fe\u950b"}, {"start": 166.88, "end": 169.6, "text": "\u8fd9\u4e2a\u786e\u5b9e\u7684\u5f62\u5f0f\u4e0a"}, {"start": 169.6, "end": 170.72, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 170.72, "end": 171.92, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 171.92, "end": 172.88, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 172.88, "end": 173.92, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 173.92, "end": 175.84, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 175.84, "end": 176.96, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 176.96, "end": 178.16, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 178.16, "end": 179.2, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 179.2, "end": 180.0, "text": "\u548c\u5f62\u5f0f\u4e0a"}, {"start": 180.0, "end": 181.92, "text": "\u8fd9\u79cd\u5f62\u5f0f"}, {"start": 181.92, "end": 184.24, "text": "\u6211\u4eec\u5f97\u5230\u4e00\u4e2a\u786e\u5b9e\u7684\u5f62\u5f0f"}, {"start": 184.24, "end": 185.28, "text": "\u7528\u94fe\u950b"}, {"start": 185.28, "end": 186.72, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 186.72, "end": 188.0, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 188.0, "end": 189.28, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 189.28, "end": 190.72, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 190.72, "end": 191.51999999999998, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 191.51999999999998, "end": 192.4, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 192.4, "end": 193.12, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 193.12, "end": 193.12, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 193.12, "end": 
194.07999999999998, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 194.07999999999998, "end": 195.12, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 195.12, "end": 195.12, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 195.12, "end": 195.84, "text": "\u548c\u4f17\u7528\u94fe\u950b"}, {"start": 195.84, "end": 201.02, "text": "\u5728\u65c1\u8fb9\u770b\u90a3\u8fb9,\u6709\u4e24\u4e2a\u4eba\u7684\u5f71\u54cd\u4f1a\u8bbe\u5b9a\u4e86"}, {"start": 201.02, "end": 202.18, "text": "\u600e\u4e48\u8bf4\u7684?"}, {"start": 202.18, "end": 206.98000000000002, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e09\u4e2a\u7eff\u8272\u7684\u5f71\u54cd\u4f1a\u8bbe\u5b9a\u4e86"}, {"start": 206.98000000000002, "end": 208.08, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e09\u4e2a\u7eff\u8272\u7684\u5f71\u54cd\u4f1a\u8bbe\u5b9a\u4e86"}, {"start": 208.08, "end": 209.54, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 209.54, "end": 210.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 210.84, "end": 211.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 211.84, "end": 212.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 212.84, "end": 213.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b,\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 213.84, "end": 214.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 214.84, "end": 215.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 215.84, "end": 216.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 216.84, "end": 217.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 217.84, "end": 218.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 218.84, "end": 219.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 219.84, "end": 220.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 220.84, "end": 221.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 221.84, "end": 222.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 222.84, "end": 223.84, "text": "\u8ba9\u6211\u4eec\u770b\u4e00\u770b"}, {"start": 223.84, "end": 226.84, "text": "\u548c\u4ee5\u8c03\u6574\u7684\u8fd9\u79cd\u56e2\u56e2\u4e00\u8d77\u62ff\u51fa\u6765\u7684\u62bd\u8c61"}, {"start": 226.84, "end": 228.84, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 228.84, "end": 229.34, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 229.34, "end": 230.38, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 230.38, "end": 231.58, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 231.58, "end": 233.58, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 233.58, "end": 234.58, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 234.58, "end": 235.08, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 235.08, "end": 235.58, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 235.58, "end": 236.58, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 236.58, "end": 237.18, "text": "\u548c\u4ee5\u5b83\u7684\u7279\u6280"}, {"start": 237.18, "end": 237.78, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 237.78, "end": 238.12, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 238.12, "end": 238.88, "text": 
"\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 238.88, "end": 240.18, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 240.18, "end": 240.78, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 240.78, "end": 241.82, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 241.82, "end": 242.88, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 242.88, "end": 243.08, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 243.08, "end": 243.92000000000002, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 243.92000000000002, "end": 243.88, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 243.88, "end": 244.92000000000002, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 244.92000000000002, "end": 245.52, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 245.52, "end": 246.46, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 246.46, "end": 246.94, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 246.94, "end": 247.06, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 247.06, "end": 248.12, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 248.12, "end": 249.02, "text": "\u8fd8\u6709\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 249.02, "end": 249.28, "text": "\u548c\u4ee5\u4ed6\u7684\u7279\u6280"}, {"start": 249.28, "end": 251.28, "text": "\u6700\u9ad8\u7684\u4e00\u5207\u662f\u6bd4\u8f83\u6bd4\u8f83\u6bd4\u8f83\u7684"}, {"start": 251.28, "end": 253.28, "text": "\u800c result \u6ca1\u6709\u6bd4\u8f83\u6bd4\u8f83"}, {"start": 253.28, "end": 255.28, "text": "\u8fd9\u79cd\u65b9\u6cd5\u662f\u4e0d\u592a\u5927"}, {"start": 255.28, "end": 257.28, "text": "\u4f46\u6211\u6ca1\u6709\u8ba4\u4e3a"}, {"start": 257.28, "end": 259.28, "text": "\u4e00\u4e2a\u7bb1\u5b50\u7684\u7bb1\u5b50"}, {"start": 259.28, "end": 260.28, "text": "\u6216\u7bb1\u5b50"}, {"start": 260.28, "end": 261.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 261.28, "end": 262.28, "text": "\u6ca1\u6709\u6bd4\u8f83\u6bd4\u8f83"}, {"start": 262.28, "end": 263.28, "text": "\u800c\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 263.28, "end": 264.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 264.28, "end": 265.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 265.28, "end": 267.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 267.28, "end": 268.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 268.28, "end": 269.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 269.28, "end": 271.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 271.28, "end": 273.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 273.28, "end": 275.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 275.28, "end": 277.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 277.28, "end": 278.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 278.28, "end": 279.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 279.28, "end": 281.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 281.28, "end": 282.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 282.28, "end": 284.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 284.28, "end": 285.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 285.28, "end": 286.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 286.28, "end": 287.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 287.28, "end": 288.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 288.28, "end": 289.28, "text": 
"\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 289.28, "end": 290.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 290.28, "end": 291.28, "text": "\u5728\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 291.28, "end": 292.28, "text": "\u4f60\u4f1a\u505a\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 292.28, "end": 302.28, "text": "\u4f60\u4f1a\u8981\u53bb\u5230\u5f88\u5feb\u7684"}, {"start": 302.28, "end": 304.28, "text": "\u6211\u5e0c\u671b\u4f60\u4f1a\u5230\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 304.28, "end": 305.28, "text": "\u7136\u540e"}, {"start": 305.28, "end": 306.28, "text": "\u7259\u5229\u80fd\u591f\u5f97\u5230"}, {"start": 306.28, "end": 308.28, "text": "\u7528\u4e86\u4e00\u6761\u5f71\u7247"}, {"start": 308.28, "end": 309.28, "text": "\u4f60\u4f1a\u62ff\u51fa\u7684\u7bb1\u5b50"}, {"start": 309.28, "end": 310.28, "text": "\u50cf\u770b\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 310.28, "end": 311.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 311.28, "end": 312.28, "text": "\u8fd9\u79cd\u7bb1\u5b50"}, {"start": 312.28, "end": 313.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 313.28, "end": 314.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 314.28, "end": 315.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 315.28, "end": 316.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 316.28, "end": 317.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 317.28, "end": 318.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 318.28, "end": 319.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 319.28, "end": 320.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 320.28, "end": 321.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 321.28, "end": 322.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 322.28, "end": 324.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 324.28, "end": 325.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 325.28, "end": 326.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 326.28, "end": 327.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 327.28, "end": 328.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 328.28, "end": 329.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 329.28, "end": 330.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 330.28, "end": 331.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 331.28, "end": 332.28, "text": "\u5728\u6700\u540e\u7684\u7bb1\u5b50"}, {"start": 332.28, "end": 336.28, "text": "\u771f\u662f\u4e3a\u4e86\u6211"}, {"start": 336.28, "end": 339.28, "text": "\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 339.28, "end": 341.28, "text": "\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 341.28, "end": 343.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 343.28, "end": 345.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 345.28, "end": 348.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 348.28, "end": 350.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 350.28, "end": 353.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 353.28, "end": 355.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 355.28, "end": 357.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 357.28, 
"end": 359.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}, {"start": 359.28, "end": 360.28, "text": "\u6211\u7ed9\u4f60\u62ff\u51fa\u6765\u7684\u7bb1\u5b50"}]
Two Minute Papers
https://www.youtube.com/watch?v=mL3CzZcBJZU
DeepMind's AI Learns Audio And Video Concepts By Itself | Two Minute Papers #184
The paper "Look, Listen and Learn" is available here: https://arxiv.org/abs/1705.08168 Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers Recommended for you: https://www.youtube.com/watch?v=hBobYd8nNtQ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, Emmanuel Mwangi, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1838412/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In our earlier episodes, when it came to learning techniques, we almost always talked about supervised learning. This means that we give the algorithm a bunch of images and some additional information, for instance, that these images depict dogs or cats. Then, the learning algorithm is exposed to new images that it had never seen before and has to be able to classify them correctly. This is kind of like a teacher sitting next to a student providing supervision. Then, the exam comes with new questions. This is supervised learning, and as you have seen from more than 180 episodes of Two Minute Papers, there is no doubt that this is an enormously successful field of research. However, this means that we have to label our datasets, so we have to add some additional information to every image we have. This is a very laborious task, which is typically performed by researchers, or through crowdsourcing, both of which take a lot of funding and hundreds of work hours. But if we think about it, we have a ton of videos on the internet. You always hear these mind-melting new statistics on how many hours of video footage are uploaded to YouTube every day. Of course, we could hire all the employees in the world to annotate these videos frame by frame to tell the algorithm that this is a guitar, this is an accordion, or a keyboard, and we would still not be able to learn on most of what's uploaded. But it would be so great to have an algorithm that can learn on unlabeled data. Fortunately, there are learning techniques in the field of unsupervised learning, which means that the algorithm is given a bunch of images or any other media and is instructed to learn on it without any additional information. There is no teacher to supervise the learning. The algorithm learns by itself. And in this work, the objective is to learn both visual and audio-related tasks in an unsupervised manner. So for instance, if we look at this layer of the visual subnetwork, we'll find neurons that get very excited when they see, for instance, someone playing an accordion. And each of the neurons in this layer belongs to a different object class. I surely have something like this for papers. And here comes the Károly-goes-crazy part one: this technique not only classifies the frames of the videos, but it also creates semantic heat maps, which show us which part of the image is responsible for the sounds that we hear. This is insanity. To accomplish this, they ran a vision subnetwork on the video part and a separate audio subnetwork to learn about the sounds. And at the last step, all this information is fused together, which brings us to the Károly-goes-crazy part two: this makes the network able to guess whether the audio and the video stream correspond to each other. It looks at a man with a fiddle, listens to a sound clip, and will say whether the two correspond to each other. Wow! The audio subnetwork also learned the concept of human voices, the sound of water, wind, music, live concerts, and much, much more. And how well does it do? It is remarkably close to human-level performance on sound classification. And all this is provided by two networks that were trained from scratch, with no supervision required. We don't need to annotate these videos. Nailed it. And please don't get this wrong, it's not like DeepMind has suddenly invented unsupervised learning. Not at all. This is a field that has been actively researched for decades. 
It's just that we rarely see really punchy results like these ones here. Truly incredible work. If you enjoyed this episode and you feel that 8 of these videos a month is worth a dollar, please consider supporting us on Patreon. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
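To make the idea above more concrete, here is a minimal sketch of an audio-visual correspondence network in the spirit of what the transcript describes: one subnetwork embeds a video frame, another embeds an audio spectrogram, and a small classifier is trained to tell matching from mismatched pairs. This is not the architecture from the paper; the framework (PyTorch), layer sizes, input shapes and names are illustrative assumptions.

```python
# Minimal sketch of the audio-visual correspondence idea (assumed PyTorch;
# layer sizes, input shapes and names are illustrative, not the paper's).
import torch
import torch.nn as nn

class VisionSubnetwork(nn.Module):
    """Embeds a single video frame (3 x 224 x 224) into a feature vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, frame):
        return self.fc(self.conv(frame).flatten(1))

class AudioSubnetwork(nn.Module):
    """Embeds a short log-spectrogram clip (1 x 257 x 200) into a feature vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, spectrogram):
        return self.fc(self.conv(spectrogram).flatten(1))

class CorrespondenceNet(nn.Module):
    """Fuses both embeddings and predicts whether frame and audio belong together."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.vision = VisionSubnetwork(embed_dim)
        self.audio = AudioSubnetwork(embed_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, frame, spectrogram):
        fused = torch.cat([self.vision(frame), self.audio(spectrogram)], dim=1)
        return self.classifier(fused)  # logits: [mismatch, match]

# The "labels" come for free: a frame paired with its own soundtrack is a
# positive example, a frame paired with audio from another video is a negative.
model = CorrespondenceNet()
frames = torch.randn(4, 3, 224, 224)
spectrograms = torch.randn(4, 1, 257, 200)
logits = model(frames, spectrograms)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 1, 0, 0]))
```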
[{"start": 0.0, "end": 4.92, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolna Ifehir."}, {"start": 4.92, "end": 10.96, "text": " In our earlier episodes, when it came to learning techniques, we almost always talked about supervised"}, {"start": 10.96, "end": 11.96, "text": " learning."}, {"start": 11.96, "end": 16.68, "text": " This means that we give the algorithm a bunch of images and some additional information,"}, {"start": 16.68, "end": 20.56, "text": " for instance, that these images depict dogs or cats."}, {"start": 20.56, "end": 25.68, "text": " Then, the learning algorithm is exposed to new images that it had never seen before and"}, {"start": 25.68, "end": 28.32, "text": " has to be able to classify them correctly."}, {"start": 28.32, "end": 32.96, "text": " This kind of like a teacher sitting next to a student providing supervision."}, {"start": 32.96, "end": 35.88, "text": " Then, the exam comes with new questions."}, {"start": 35.88, "end": 41.72, "text": " This is supervised learning, and as you have seen from more than 180 episodes of two-minute"}, {"start": 41.72, "end": 47.08, "text": " papers, there is no doubt that this is an enormously successful field of research."}, {"start": 47.08, "end": 52.16, "text": " However, this means that we have to label our datasets, so we have to add some additional"}, {"start": 52.16, "end": 54.92, "text": " information to every image we have."}, {"start": 54.92, "end": 61.52, "text": " This is a very laborious task, which is typically performed by researchers, or through crowdsourcing,"}, {"start": 61.52, "end": 65.52, "text": " both of which takes a lot of funding and hundreds of work hours."}, {"start": 65.52, "end": 69.24000000000001, "text": " But if we think about it, we have a ton of videos on the internet."}, {"start": 69.24000000000001, "end": 74.96000000000001, "text": " You always hear these mind-melting new statistics on how many hours of video footage is uploaded"}, {"start": 74.96000000000001, "end": 76.52000000000001, "text": " to YouTube every day."}, {"start": 76.52000000000001, "end": 82.04, "text": " Of course, we could hire all the employees in the world to annotate these videos frame-by-frame"}, {"start": 82.04, "end": 87.28, "text": " to tell the algorithm that this is a guitar, this is an accordion, or a keyboard, and we"}, {"start": 87.28, "end": 91.16000000000001, "text": " would still not be able to learn on most of what's uploaded."}, {"start": 91.16000000000001, "end": 95.76, "text": " But it would be so great to have an algorithm that can learn on unlabeled data."}, {"start": 95.76, "end": 100.72, "text": " However, there are learning techniques in the field of unsupervised learning, which means"}, {"start": 100.72, "end": 106.04, "text": " that the algorithm is given a bunch of images or any media and is instructed to learn on"}, {"start": 106.04, "end": 108.60000000000001, "text": " it without any additional information."}, {"start": 108.60000000000001, "end": 110.96000000000001, "text": " There is no teacher to supervise the learning."}, {"start": 110.96, "end": 113.39999999999999, "text": " The algorithm learns by itself."}, {"start": 113.39999999999999, "end": 120.03999999999999, "text": " And in this work, the objective is to learn both visual and audio-related tasks in an unsupervised"}, {"start": 120.03999999999999, "end": 121.03999999999999, "text": " manner."}, {"start": 121.03999999999999, "end": 125.8, "text": " So for instance, if we look at this layer of the visual 
subnetwork, we'll find neurons"}, {"start": 125.8, "end": 131.04, "text": " that get very excited when they see, for instance, someone playing on accordion."}, {"start": 131.04, "end": 135.32, "text": " And each of the neurons in this layer belong to different object classes."}, {"start": 135.32, "end": 138.24, "text": " I surely have something like this for papers."}, {"start": 138.24, "end": 141.44, "text": " And here comes the katoi goes crazy part one."}, {"start": 141.44, "end": 146.88, "text": " This technique not only classifies the frames of the videos, but it also creates semantic"}, {"start": 146.88, "end": 152.24, "text": " heat maps, which show us which part of the image is responsible for the sounds that we"}, {"start": 152.24, "end": 153.24, "text": " hear."}, {"start": 153.24, "end": 154.72, "text": " This is insanity."}, {"start": 154.72, "end": 160.76000000000002, "text": " To accomplish this, they ran a vision subnetwork on the video part and a separate audio subnetwork"}, {"start": 160.76000000000002, "end": 162.48000000000002, "text": " to learn about the sounds."}, {"start": 162.48, "end": 168.2, "text": " And at the last step, all this information is fused together to obtain katoi goes crazy"}, {"start": 168.2, "end": 169.56, "text": " part two."}, {"start": 169.56, "end": 175.07999999999998, "text": " This makes the network able to guess whether the audio and the video stream correspond to"}, {"start": 175.07999999999998, "end": 176.07999999999998, "text": " each other."}, {"start": 176.07999999999998, "end": 180.83999999999997, "text": " It looks at a man with a fiddle, listens to a sound clip, and will say whether the two"}, {"start": 180.83999999999997, "end": 182.79999999999998, "text": " correspond to each other."}, {"start": 182.79999999999998, "end": 183.79999999999998, "text": " Wow!"}, {"start": 183.79999999999998, "end": 190.23999999999998, "text": " The audio subnetwork also learned the concept of human voices, the sound of water, wind,"}, {"start": 190.24, "end": 193.76000000000002, "text": " music, live concerts, and much much more."}, {"start": 193.76000000000002, "end": 199.68, "text": " And the answer is yes, it is remarkably close to human level performance on sound classification."}, {"start": 199.68, "end": 205.84, "text": " And all this is provided by the two networks that were trained from scratch and no supervision"}, {"start": 205.84, "end": 206.84, "text": " is required."}, {"start": 206.84, "end": 209.32000000000002, "text": " We don't need to annotate these videos."}, {"start": 209.32000000000002, "end": 210.32000000000002, "text": " Nailed it."}, {"start": 210.32000000000002, "end": 214.84, "text": " And please don't get this wrong, it's not like DeepMind has suddenly invented unsupervised"}, {"start": 214.84, "end": 215.84, "text": " learning."}, {"start": 215.84, "end": 216.84, "text": " Not at all."}, {"start": 216.84, "end": 220.0, "text": " This is a field that has been actively researched for decades."}, {"start": 220.0, "end": 224.64, "text": " It's just that we rarely see really punchy results like these ones here."}, {"start": 224.64, "end": 226.08, "text": " Truly incredible work."}, {"start": 226.08, "end": 231.16, "text": " If you enjoyed this episode and you feel that 8 of these videos a month is worth a dollar,"}, {"start": 231.16, "end": 233.84, "text": " please consider supporting us on Patreon."}, {"start": 233.84, "end": 236.24, "text": " Details are available in the video description."}, {"start": 236.24, "end": 256.16, 
"text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qKhSZmS6aWw
Photorealistic Fur With Multi-Scale Rendering | Two Minute Papers #183
The paper "An Efficient and Practical Near and Far Field Fur Reflectance Model" is available here: https://people.eecs.berkeley.edu/~lingqi/publications/paper_fur2.pdf https://people.eecs.berkeley.edu/~lingqi/ The free Rendering course is available on YouTube here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, Emmanuel Mwangi, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1238238/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Creating a photorealistic image with fur and hair is hard. It is typically done by using light simulation programs where we use the laws of physics to simulate the path of millions and millions of light rays as they bounce off of different objects in the scene. This typically takes from minutes to hours if we are lucky. However, in the presence of materials like hair and fur, this problem becomes even more difficult because fur fibers have an inner scattering medium. This means that we not only have to bounce these rays off of the surface of objects, but also have to simulate how light is transmitted between these inner layers. And initially, we start out with a noisy image, and this noise gets slowly eliminated as we compute more and more rays for the simulation. SPP means samples per pixel, which is the number of rays we compute for each pixel in our image. You can see that with previous techniques, using 256 samples per pixel leads to a very noisy image, and we need to spend significantly more time to obtain a clear, converged image. And this new technique enables us to get the most out of our samples: if we render an image with 256 SPP, we get roughly equivalent quality to a previous technique using around 6 times as many samples. If we had a film studio and someone walked up to us and said that we can render the next Guardians of the Galaxy film 6 times cheaper, we would surely be all over it. This would save us millions of dollars. The main selling point is that this work introduces a multi-scale model for rendering hair and fur. This means that it computes near and far-field scattering separately. The far-field scattering model contains simplifications, which means that it's way faster to compute. This simplification is sufficient if we look at the model from afar, or when we look at fur fibers that are way thinner than human hair strands. The near-field model is more faithful to reality, but also more expensive to compute. And the final, most important puzzle piece is stitching together the two. Whenever we can get away with it, we should use the far-field model and compute the expensive near-field model only when it makes a difference visually. And one more thing: as these hamsters get closer or further away from the camera, we need to make sure that there is no annoying jump when we are switching models. And as you can see, the animations are buttery smooth. And when we look at it, we see beautiful rendered images, and if we didn't know a bit about the theory, we would have no idea about the multi-scale wizardry under the hood. Excellent work. The paper also contains a set of decompositions for different light paths. For instance, here you can see a fully rendered image on the left, and different combinations of light reflection and transmission events. For instance, R stands for one light reflection, TT for two transmission events, and so on. The S in the superscript denotes light scattering events. Adding up all the possible combinations of these T's and R's, we get the photorealistic image on the left. That's really cool, loving it. If you would like to learn more about light simulations, I'm holding a full, master-level course on it at the Technical University of Vienna. And the entirety of this course is available free of charge for everyone. I got some feedback from you Fellow Scholars that you watched it, and enjoyed it quite a bit. Give it a go. 
As always, the details are available in the video description. Thanks for watching, and for your generous support, and I'll see you next time.
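To illustrate the multi-scale switching described above, here is a small sketch in which the cheap far-field model is used when a fur fiber projects to a small fraction of a pixel, the accurate near-field model is used up close, and the two are smoothly blended in a transition band so there is no visible jump as the camera moves. The function names, thresholds and the pinhole-camera width estimate are assumptions for illustration, not the actual criteria used in the paper.

```python
# Illustrative sketch of blending near- and far-field fur shading models
# based on the apparent (on-screen) size of a fiber. Thresholds, the camera
# model and the shading stand-ins are hypothetical.

def projected_fiber_width(fiber_radius, distance, focal_length, pixel_size):
    """Approximate on-screen width of a fur fiber, in pixels (pinhole camera)."""
    return (2.0 * fiber_radius * focal_length / distance) / pixel_size

def smoothstep(edge0, edge1, x):
    """Standard smooth interpolation weight in [0, 1]."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def shade_fur(width_px, near_field_shade, far_field_shade,
              near_threshold=1.0, far_threshold=0.25):
    """Pick or blend the two reflectance models based on apparent fiber size.

    near_field_shade / far_field_shade are stand-ins for the expensive and the
    simplified reflectance evaluations; both return an RGB triple.
    """
    if width_px <= far_threshold:
        return far_field_shade()            # fiber far thinner than a pixel: cheap model
    if width_px >= near_threshold:
        return near_field_shade()           # fiber clearly visible: accurate model
    # Transition band: mix the two so the switch never pops during animation.
    w = smoothstep(far_threshold, near_threshold, width_px)
    near, far = near_field_shade(), far_field_shade()
    return tuple(w * n + (1.0 - w) * f for n, f in zip(near, far))

# Toy usage with constant stand-in shading results.
width = projected_fiber_width(fiber_radius=5e-5, distance=0.8,
                              focal_length=0.035, pixel_size=1e-5)
color = shade_fur(width, lambda: (0.80, 0.60, 0.40), lambda: (0.70, 0.55, 0.38))
```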
[{"start": 0.0, "end": 5.86, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir, creating a photorealistic"}, {"start": 5.86, "end": 9.14, "text": " image with fur and hair is hard."}, {"start": 9.14, "end": 13.96, "text": " It is typically done by using light simulation programs where we use the laws of physics"}, {"start": 13.96, "end": 19.400000000000002, "text": " to simulate the path of millions and millions of light rays as they bounce off of different"}, {"start": 19.400000000000002, "end": 20.76, "text": " objects in the scene."}, {"start": 20.76, "end": 24.28, "text": " This typically takes from minutes to hours if we are lucky."}, {"start": 24.28, "end": 29.2, "text": " However, in the presence of materials like hair and fur, this problem becomes even more"}, {"start": 29.2, "end": 33.62, "text": " difficult because fur fibers have inner scattering media."}, {"start": 33.62, "end": 38.78, "text": " This means that we not only have to bounce these rays off of the surface of objects, but"}, {"start": 38.78, "end": 43.58, "text": " also have to simulate how light is transmitted between these inner layers."}, {"start": 43.58, "end": 48.76, "text": " And initially, we start out with a noisy image, and this noise gets slowly eliminated as"}, {"start": 48.76, "end": 51.480000000000004, "text": " we compute more and more rays for the simulation."}, {"start": 51.480000000000004, "end": 56.68, "text": " SPP means samples per pixel, which is the number of rays we compute for each pixel in our"}, {"start": 56.68, "end": 57.68, "text": " image."}, {"start": 57.68, "end": 64.24, "text": " You can see that with previous techniques, using 256 samples per pixel leads to a very noisy"}, {"start": 64.24, "end": 69.96000000000001, "text": " image, and we need to spend significantly more time to obtain a clear, converged image."}, {"start": 69.96000000000001, "end": 74.4, "text": " And this new technique enables us to get the most out of our samples, and if we render"}, {"start": 74.4, "end": 81.16, "text": " an image with 256 SPP, we get a roughly equivalent quality to a previous technique using"}, {"start": 81.16, "end": 84.03999999999999, "text": " around 6 times as many samples."}, {"start": 84.04, "end": 89.12, "text": " If we had a film studio and someone walked up on us and said that we can render the next"}, {"start": 89.12, "end": 94.36000000000001, "text": " guardians of the Galaxy film 6 times cheaper would surely be all over it."}, {"start": 94.36000000000001, "end": 96.72, "text": " This would save us millions of dollars."}, {"start": 96.72, "end": 102.04, "text": " The main selling point is that this work introduces a multi-scale model for rendering hair"}, {"start": 102.04, "end": 103.04, "text": " and fur."}, {"start": 103.04, "end": 107.4, "text": " This means that it computes near and far-field scattering separately."}, {"start": 107.4, "end": 113.4, "text": " The far-field scattering model contains simplifications, which means that it's way faster to compute."}, {"start": 113.4, "end": 117.92, "text": " This simplification is sufficient if we look at the model from afar, or we look closely"}, {"start": 117.92, "end": 121.96000000000001, "text": " at the hair model that is way thinner than human hair strands."}, {"start": 121.96000000000001, "end": 127.56, "text": " The near-field model is more faithful to reality, but also more expensive to compute."}, {"start": 127.56, "end": 132.28, "text": " And the final, most important puzzle piece is stitching 
together the two."}, {"start": 132.28, "end": 137.0, "text": " Whenever we can get away with it, we should use the far-field model and compute the expensive"}, {"start": 137.0, "end": 140.96, "text": " near-field model only when it makes a difference visually."}, {"start": 140.96, "end": 146.12, "text": " And one more thing, as these hamsters get closer or further away from the camera, we need"}, {"start": 146.12, "end": 150.24, "text": " to make sure that there is no annoying jump when we are switching models."}, {"start": 150.24, "end": 153.56, "text": " And as you can see, the animations are buttery smooth."}, {"start": 153.56, "end": 158.20000000000002, "text": " And when we look at it, we see beautiful rendered images, and if we didn't know a bit about"}, {"start": 158.20000000000002, "end": 163.72, "text": " the theory, we would have no idea about the multi-scale wizardry under the hood."}, {"start": 163.72, "end": 164.72, "text": " Excellent work."}, {"start": 164.72, "end": 168.88, "text": " The paper also contains a set of deck compositions for different lightpaths."}, {"start": 168.88, "end": 173.51999999999998, "text": " For instance, here you can see a fully rendered image on the left, and different combinations"}, {"start": 173.51999999999998, "end": 176.44, "text": " of light reflection and transmission events."}, {"start": 176.44, "end": 182.56, "text": " For instance, R stands for one light reflection, TT for two transmission events, and so on."}, {"start": 182.56, "end": 186.35999999999999, "text": " The S in the superscript denotes light scattering events."}, {"start": 186.35999999999999, "end": 191.2, "text": " Adding up all the possible combinations of these T's and R's, we get the photorealistic"}, {"start": 191.2, "end": 192.2, "text": " image on the left."}, {"start": 192.2, "end": 194.48, "text": " That's really cool, loving it."}, {"start": 194.48, "end": 198.88, "text": " If you would like to learn more about light simulations, I'm holding a full, master-level"}, {"start": 198.88, "end": 202.0, "text": " course on it at the Technical University of Vienna."}, {"start": 202.0, "end": 206.16, "text": " And the entirety of this course is available free of charge for everyone."}, {"start": 206.16, "end": 210.16, "text": " I got some feedback from you fellow scholars that you watched it, and enjoyed it quite"}, {"start": 210.16, "end": 211.16, "text": " a bit."}, {"start": 211.16, "end": 212.16, "text": " Give it a go."}, {"start": 212.16, "end": 214.67999999999998, "text": " As always, the details are available in the video description."}, {"start": 214.68, "end": 234.68, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=St5lxIxYGkI
DeepMind Publishes StarCraft II Learning Environment | Two Minute Papers #182
The paper "StarCraft II: A New Challenge for Reinforcement Learning" and its source code is available here: https://arxiv.org/abs/1708.04782 https://github.com/Blizzard/s2client-proto WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: Blizzard - http://media.blizzard.com/sc2/media/wallpapers/wall000/wall000-1600x1200.jpg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This topic has been perhaps the most highly anticipated by you Fellow Scholars, and I am extremely excited to show you the first joint paper between DeepMind and Blizzard on creating an AI program to play StarCraft 2. Hell yeah! Fortunately, we have a paper where every detail is meticulously described, so there's much less room for misunderstandings. And before we start, note that this is a preliminary work, so please don't expect superhuman performance. However difficult you thought this problem was, you'll see in a minute that it's way more complex than most people would think. But before we start, what is StarCraft 2? It is a highly technical strategy game for which writing a formidable AI will be a huge challenge, for three reasons. One, we have imperfect information with a partially observed map. If we wish to see what the opponent is up to, we have to devote resources to scouting, which may or may not be successful depending on the vigilance of the other player. Two, we need to select and control hundreds of units under heavy time pressure. One wrong decision and we can quickly lose most of our units and become unable to recover from it. And three, perhaps the most important part: long-term strategies need to be developed, where a poor decision in the early game can lead to a crushing defeat several thousands of actions later. These cases are especially difficult to identify and learn. However, we ran a bit too far ahead to the gameplay part. What needs to be emphasized is that there is a step number one before that. And that step number one is making sure that the AI is able to communicate and interact with the game, which requires a significant engineering effort. In this paper, a Python-based interface is described to make this happen. It is great to have companies like DeepMind and OpenAI who are devoted to laying down the foundations for such an interface, which is a herculean task. This work would likely have never seen the light of day if AI research only took place in academia. Huge respect and much thanks to the DeepMind guys for making this happen. To play the game, deep reinforcement learning is used, which you heard about earlier in the series. This is a powerful learning algorithm where a neural network is used to process the video input and is combined with a reinforcement learner. With reinforcement learning, we are observing the environment around us and choose the next action to maximize the score or reward. However, defining the score was very easy in Atari Breakout because we knew that if the number of our lives drops to zero, we lost. And if we break a lot of bricks, our score improves. Simple. Not so much in StarCraft 2, because how do we know exactly if we are winning? What is the score we are trying to maximize? In this work, there are two definitions for the score. One that we get to know at the very end, which describes whether we won, had a tie, or lost. This is the score that ultimately matters. However, this information is not available throughout the game to drive the reinforcement learner, so there is an intermediate score that is referred to as the Blizzard score in the paper, which involves a weighted sum of current resources and upgrades as well as our units and buildings. This sounds good as a first approximation, since it increases if we win battles and manage our resources well, and decreases when we are losing. 
However, there are many matches where the player with more resources does not have enough time to spend them and ultimately loses a deciding encounter. It remains to be seen whether this is exactly what we need to maximize to beat a formidable human player. There are also non-trivial engineering decisions on how to process the video stream. The current system uses a set of feature layers, which encode relevant information for the AI, such as the terrain height, the camera location, hit points for the units on the screen and much, much more. There is a huge amount of information that the convolutional neural network has to make sense of. And I think it is now easy to see that starting out by throwing the AI in the deep water and expecting it to perform well on a full one versus one match at this point is a forlorn effort. The paper describes a set of mini-games where the algorithm can learn different aspects of the game in isolation, such as picking up mineral shards scattered around the map, defeating enemy units in small skirmishes, building units or harvesting resources. In these mini-games, the AI has reached the level of a novice human player, which is quite amazing given the magnitude and the complexity of the problem. The authors also encourage the community to create more mini-games for the AI to train on. I really love the openness and the community-effort aspects of this work. And we have only just scratched the surface; there is so much more in the paper, with a lot more non-trivial design decisions and a database with tens of thousands of recorded games. And the best part is that the source code for this environment is available right now for the fellow tinkerers out there. I've put a link to this in the video description. This is going to be one heck of a challenge for even the brightest AI researchers of our time. I can't wait to get my hands on the code, and I am also very excited to read some follow-up papers on this. I expect there will be many of those in the following months. In the meantime, as we know, OpenAI is also working on Dota with remarkable results, and there's lots of discussion about whether a Dota 5 vs 5 or a StarCraft 2 1 vs 1 game is more complex for the AI to learn. If you have an opinion on this, make sure to leave a comment below this video: which one is more complex, and why? This also signals that there's going to be tons of fun to be had with AI and video games this year. Stay tuned. Thanks for watching and for your generous support and I'll see you next time.
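As a toy illustration of the two score definitions discussed above, the sketch below contrasts the sparse terminal win/tie/loss signal with a dense, Blizzard-score-like weighted sum over resources, upgrades, units and buildings, and turns the latter into a per-step reward by taking its change between observations. The weights and the GameState fields are made up for illustration; they are not the values used by StarCraft 2 or by the paper.

```python
# Toy sketch of the two reward signals: the terminal outcome that ultimately
# matters, and an intermediate "Blizzard-score"-style weighted sum that can
# drive a reinforcement learner during the game. Weights and fields are
# hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class GameState:
    minerals: int
    vespene: int
    upgrades: int
    unit_value: int        # summed build cost of living units
    building_value: int    # summed build cost of standing buildings

def terminal_reward(outcome: str) -> int:
    """Sparse signal, available only when the match ends."""
    return {"win": 1, "tie": 0, "loss": -1}[outcome]

def blizzard_like_score(state: GameState) -> float:
    """Dense intermediate signal: grows with a healthy economy and army,
    shrinks when we are losing fights or wasting resources."""
    return (1.0 * (state.minerals + state.vespene)
            + 50.0 * state.upgrades
            + 1.0 * state.unit_value
            + 1.0 * state.building_value)

def step_reward(prev: GameState, curr: GameState) -> float:
    """Per-step reward: the change in score between two observations."""
    return blizzard_like_score(curr) - blizzard_like_score(prev)

# Example: losing an army between two observations yields a negative step reward,
# even though a little extra mining happened in the meantime.
before = GameState(minerals=500, vespene=200, upgrades=2, unit_value=3000, building_value=2500)
after = GameState(minerals=550, vespene=220, upgrades=2, unit_value=1200, building_value=2500)
print(step_reward(before, after))   # negative: the lost units outweigh the mined resources
```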
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Ejona Efehir."}, {"start": 4.8, "end": 10.76, "text": " This topic has been perhaps the most highly anticipated by you Fellow Scholars, and I am extremely"}, {"start": 10.76, "end": 16.88, "text": " excited to show you the first joint paper between Deep Mind and Blizzard on creating an"}, {"start": 16.88, "end": 20.28, "text": " AI program to play Starcraft 2."}, {"start": 20.28, "end": 21.28, "text": " Hell yeah!"}, {"start": 21.28, "end": 26.36, "text": " Unfortunately we have a paper where every detail is meticulously described, so there's much"}, {"start": 26.36, "end": 28.48, "text": " less room for misunderstandings."}, {"start": 28.48, "end": 33.08, "text": " And before we start, note that this is a preliminary work, so please don't expect"}, {"start": 33.08, "end": 34.8, "text": " superhuman performance."}, {"start": 34.8, "end": 39.2, "text": " However difficult you thought this problem was, you'll see in a minute that it's way"}, {"start": 39.2, "end": 41.96, "text": " more complex than most people would think."}, {"start": 41.96, "end": 44.96, "text": " But before we start, what is Starcraft 2?"}, {"start": 44.96, "end": 49.88, "text": " It is a highly technical strategy game which will be a huge challenge to write a formidable"}, {"start": 49.88, "end": 52.88, "text": " AI for because of three reasons."}, {"start": 52.88, "end": 57.32, "text": " One, we have imperfect information with a partially observed map."}, {"start": 57.32, "end": 62.32, "text": " If we wish to see what the opponent is up to, we have to devote resources to scouting,"}, {"start": 62.32, "end": 66.72, "text": " which may or may not be successful depending on the vigilance of the other player."}, {"start": 66.72, "end": 72.16, "text": " Two, we need to select and control hundreds of units under heavy time pressure."}, {"start": 72.16, "end": 77.24000000000001, "text": " One wrong decision and we can quickly lose most of our units and become unable to recover"}, {"start": 77.24000000000001, "end": 78.24000000000001, "text": " from it."}, {"start": 78.24000000000001, "end": 81.4, "text": " And three, perhaps the most important part."}, {"start": 81.4, "end": 85.64, "text": " Long-term strategies need to be developed where a poor decision in the early game can"}, {"start": 85.64, "end": 89.84, "text": " lead to a crushing defeat several thousands of actions later."}, {"start": 89.84, "end": 93.6, "text": " These cases are especially difficult to identify and learn."}, {"start": 93.6, "end": 97.04, "text": " However, we ran a bit too far ahead to the gameplay part."}, {"start": 97.04, "end": 101.2, "text": " What needs to be emphasized is that there is a step number one before that."}, {"start": 101.2, "end": 106.44, "text": " And that step number one is making sure that the AI is able to communicate and interact"}, {"start": 106.44, "end": 110.4, "text": " with the game, which requires a significant engineering effort."}, {"start": 110.4, "end": 114.64, "text": " In this paper, a Python-based interface is described to make this happen."}, {"start": 114.64, "end": 120.6, "text": " It is great to have companies like DeepMind and OpenAI who are devoted to lay down the foundations"}, {"start": 120.6, "end": 123.96000000000001, "text": " for such an interface, which is a herculean task."}, {"start": 123.96000000000001, "end": 128.72, "text": " This work would likely have never seen the light of day if AI research would 
only take place"}, {"start": 128.72, "end": 129.72, "text": " in academia."}, {"start": 129.72, "end": 133.88, "text": " Huge respect and much thanks for the DeepMind guys for making this happen."}, {"start": 133.88, "end": 138.4, "text": " To play the game, Deep Reinforcement Learning is used, which you heard about earlier in"}, {"start": 138.4, "end": 139.4, "text": " the series."}, {"start": 139.4, "end": 144.16, "text": " This is a powerful learning algorithm where a neural network is used to process the video"}, {"start": 144.16, "end": 147.72, "text": " input and is combined with a reinforcement learner."}, {"start": 147.72, "end": 152.24, "text": " With reinforcement learning, we are observing the environment around us and choose the next"}, {"start": 152.24, "end": 155.28, "text": " action to maximize the score or reward."}, {"start": 155.28, "end": 161.2, "text": " However, defining score was very easy in Atari Breakout because we knew that if the number"}, {"start": 161.2, "end": 164.35999999999999, "text": " of our lives drops to zero, we lost."}, {"start": 164.35999999999999, "end": 167.6, "text": " And if we break a lot of breaks, our score improves."}, {"start": 167.6, "end": 168.6, "text": " Simple."}, {"start": 168.6, "end": 173.4, "text": " Not so much in StarCraft 2 because how do we know exactly if we are winning?"}, {"start": 173.4, "end": 176.4, "text": " What is the score we are trying to maximize?"}, {"start": 176.4, "end": 179.32, "text": " In this work, there are two definitions for score."}, {"start": 179.32, "end": 185.32, "text": " One that we get to know at the very end that describes whether we won, had a tie or lost."}, {"start": 185.32, "end": 187.6, "text": " This is the score that ultimately matters."}, {"start": 187.6, "end": 192.48000000000002, "text": " However, this information is not available throughout the game to drive the reinforcement"}, {"start": 192.48000000000002, "end": 197.4, "text": " learner, so there is an intermediate score that is referred to as Blizzard Score in the"}, {"start": 197.4, "end": 203.04000000000002, "text": " paper, which involves a weighted sum of current resources and upgrades as well as our units"}, {"start": 203.04, "end": 204.04, "text": " and buildings."}, {"start": 204.04, "end": 209.4, "text": " This sounds good for a first approximation since it is monotonically increasing if we win"}, {"start": 209.4, "end": 213.95999999999998, "text": " battles and manage our resources well and decreases when we are losing."}, {"start": 213.95999999999998, "end": 218.2, "text": " However, there are many matches where the player with the more resources does not have"}, {"start": 218.2, "end": 222.39999999999998, "text": " enough time to spend it and ultimately loses a deciding encounter."}, {"start": 222.39999999999998, "end": 227.76, "text": " It remains to be seen whether this is exactly what we need to maximize to beat a formidable"}, {"start": 227.76, "end": 228.76, "text": " human player."}, {"start": 228.76, "end": 233.6, "text": " There are also non-trivial engineering decisions on how to process the video stream."}, {"start": 233.6, "end": 239.0, "text": " The current system uses a set of feature layers, which encode relevant information for the"}, {"start": 239.0, "end": 245.04, "text": " AI, such as terrain height, the camera location, heatpoints for the units on the screen and"}, {"start": 245.04, "end": 246.04, "text": " much, much more."}, {"start": 246.04, "end": 250.48, "text": " There is a huge amount of 
information that the convolutional neural network has to make"}, {"start": 250.48, "end": 251.48, "text": " sense of."}, {"start": 251.48, "end": 256.8, "text": " And I think it is now easy to see that starting out with throwing the AI in the deep water"}, {"start": 256.8, "end": 262.76, "text": " and expecting it to perform well on a full one versus one match at this point is a for"}, {"start": 262.76, "end": 263.76, "text": " loan effort."}, {"start": 263.76, "end": 268.32, "text": " The paper describes a set of mini-games where the algorithm can learn different aspects"}, {"start": 268.32, "end": 273.56, "text": " of the game in isolation, such as picking up mineral shards scattered around the map,"}, {"start": 273.56, "end": 279.2, "text": " defeating enemy units in small skirmishes, building units or harvesting resources."}, {"start": 279.2, "end": 284.2, "text": " In these mini-games, the AI has reached the level of a novice human player which is quite"}, {"start": 284.2, "end": 287.76, "text": " amazing given the magnitude and the complexity of the problem."}, {"start": 287.76, "end": 292.68, "text": " The authors also encourage the community to create more mini-games for the AI to train"}, {"start": 292.68, "end": 293.68, "text": " on."}, {"start": 293.68, "end": 297.44, "text": " I really love the openness and the community-effort aspects of this work."}, {"start": 297.44, "end": 302.0, "text": " And with only just scratched the surface, there is so much more in the paper with a lot"}, {"start": 302.0, "end": 307.4, "text": " more non-trivial design decisions and the database with tens of thousands of recorded"}, {"start": 307.4, "end": 308.4, "text": " games."}, {"start": 308.4, "end": 313.52, "text": " And the best part is that the source code for this environment is available right now"}, {"start": 313.52, "end": 315.4, "text": " for the fellow tinkerers out there."}, {"start": 315.4, "end": 318.0, "text": " I've put a link to this in the video description."}, {"start": 318.0, "end": 322.79999999999995, "text": " This is going to be one heck of a challenge for even the brightest AI researchers of our"}, {"start": 322.79999999999995, "end": 323.79999999999995, "text": " time."}, {"start": 323.79999999999995, "end": 328.52, "text": " I can't wait to get my hands on the code and also I am very excited to read some follow-up"}, {"start": 328.52, "end": 329.52, "text": " papers on this."}, {"start": 329.52, "end": 332.71999999999997, "text": " I expect there will be many of those in the following months."}, {"start": 332.71999999999997, "end": 338.12, "text": " In the meantime, as we know, OpenAI is also working on Dota with remarkable results and"}, {"start": 338.12, "end": 344.76, "text": " there's lots of discussion whether a Dota 5 vs 5 or a StarCraft 2 1 vs 1 game is more"}, {"start": 344.76, "end": 346.84000000000003, "text": " complex for the AI to learn."}, {"start": 346.84000000000003, "end": 350.68, "text": " If you have an opinion on this, make sure to leave a comment below this video, which"}, {"start": 350.68, "end": 352.08, "text": " is more complex."}, {"start": 352.08, "end": 353.08, "text": " Why?"}, {"start": 353.08, "end": 357.68, "text": " This also signals that there's going to be tons of fun to be had with AI and video games"}, {"start": 357.68, "end": 358.68, "text": " this year."}, {"start": 358.68, "end": 359.68, "text": " Stay tuned."}, {"start": 359.68, "end": 378.64, "text": " Thanks for watching and for your generous support and I'll see you next time."}]