Columns: CHANNEL_NAME (string, 1 unique value), URL (string, 43 characters), TITLE (string, 19 to 90 characters), DESCRIPTION (string, 475 to 4.65k characters), TRANSCRIPTION (string, 0 to 20.1k characters), SEGMENTS (string, 2 to 30.8k characters)
Two Minute Papers
https://www.youtube.com/watch?v=HSmm_vEVs10
Real-Time Noise Filtering For Light Simulations | Two Minute Papers #181
The paper "Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination" is available here: http://cg.ivd.kit.edu/svgf.php WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk a lot about photorealistic rendering, which is one of the most exciting areas in computer graphics research. Photorealistic rendering means that we put virtual objects in a scene, assign material models to them, and then run a light simulation program to create a beautiful image. This image depicts how these objects would look in reality. This is particularly useful for the film industry, because we can create highly realistic scenes and set them up in a way that we couldn't do in real life. We can have any possible object we can imagine, light sources that we wouldn't ever be able to buy, change the time of the day, or even the planet we are on. Practically, we have an infinite budget. That's amazing. However, creating such an image takes a long time, often on the order of hours to days. You can see me render an image, and even if the footage is sped up significantly, you can see that this is going to take a long, long time. SPP means samples per pixel, so this is the number of light rays we compute for every pixel. The more SPP, the cleaner, more converged image we get. This technique performs spatiotemporal filtering. This means that we take a noisy input video, try to eliminate the noise, and guess how the final image would look. And it can create almost fully converged images from extremely noisy inputs. Well, as you can see, these videos are created with one sample per pixel, which is as noisy as it gets. These images with one sample per pixel can be created extremely quickly, in less than 10 milliseconds per image, and this new denoiser also takes around 10 milliseconds to reconstruct the final image from the noisy input. And yes, you heard it right, this is finally a real-time result. This all happens through decoupling the direct and indirect effects of light sources and denoising them separately. In the meantime, the algorithm also tries to estimate the amount of noise in different parts of the image to provide more useful information for the denoising routines. The fact that the entire pipeline runs on the graphics card is a great testament to the simplicity of this algorithm. Wherever you see the term SVGF, you are seeing the results of the new technique. So we have these noisy input images with one SPP, and look at that. Wow! This is one of those papers that looks like magic, and no neural networks or learning algorithms have been used in this work. Not so long ago, I speculated, or more accurately hoped, that real-time photorealistic rendering would be a possibility during my lifetime. And just a few years later, this paper appears. We know that the rate of progress in computer graphics research is just staggering, but this is too much to handle. Super excited to see where the artists will take this, and of course, I'll be here to show you the coolest follow-up works. Thanks for watching and for your generous support, and I'll see you next time.
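The description above maps fairly directly onto a two-stage filter: accumulate samples over time, estimate how noisy each region still is, and filter more aggressively where the variance is high. Below is a minimal NumPy sketch of that idea; it is only an illustration, not the paper's SVGF implementation, which additionally reprojects history with motion vectors, filters direct and indirect illumination separately, and uses an edge-aware à-trous filter guided by depth and normal buffers.

```python
# Minimal sketch of spatiotemporal, variance-guided denoising in the spirit of SVGF.
# Illustration only: frame sizes, blend factor, and filter radii are arbitrary choices.
import numpy as np

def temporal_accumulate(curr, prev_accum, alpha=0.2):
    """Blend the current noisy 1-spp frame into a running history average."""
    if prev_accum is None:
        return curr
    return alpha * curr + (1.0 - alpha) * prev_accum

def estimate_variance(frame, radius=1):
    """Estimate per-pixel luminance variance from a small spatial neighborhood."""
    h, w = frame.shape
    padded = np.pad(frame, radius, mode="edge")
    mean = np.zeros_like(frame)
    mean_sq = np.zeros_like(frame)
    n = (2 * radius + 1) ** 2
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            mean += shifted
            mean_sq += shifted ** 2
    mean /= n
    mean_sq /= n
    return np.maximum(mean_sq - mean ** 2, 0.0)

def variance_guided_filter(frame, variance, radius=2, sigma=1.0):
    """Spatial filter whose strength grows with the locally estimated noise."""
    h, w = frame.shape
    padded = np.pad(frame, radius, mode="edge")
    out = np.zeros_like(frame)
    weight_sum = np.zeros_like(frame)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            spatial_w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            # Noisier regions (high variance) tolerate larger color differences.
            range_w = np.exp(-((shifted - frame) ** 2) / (variance + 1e-4))
            w_total = spatial_w * range_w
            out += w_total * shifted
            weight_sum += w_total
    return out / weight_sum

# Toy usage: denoise a stream of synthetic 1-spp "renders" of a constant gray image.
history = None
rng = np.random.default_rng(0)
for _ in range(8):
    noisy_frame = 0.5 + rng.normal(0.0, 0.3, size=(64, 64))  # stand-in for a 1-spp render
    history = temporal_accumulate(noisy_frame, history)
    var = estimate_variance(history)
    denoised = variance_guided_filter(history, var)
print("residual noise (std):", float(denoised.std()))
```

In the real pipeline both stages run as GPU passes in a few milliseconds; the loops here are written out only to keep the logic readable.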
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.72, "end": 10.02, "text": " In this series, we talk a lot about photorealistic rendering, which is one of the most exciting"}, {"start": 10.02, "end": 12.66, "text": " areas in computer graphics research."}, {"start": 12.66, "end": 17.7, "text": " Photo-realistic rendering means that we put virtual objects in a scene, assign material"}, {"start": 17.7, "end": 23.52, "text": " models to them, and then run a light simulation program to create a beautiful image."}, {"start": 23.52, "end": 27.740000000000002, "text": " This image depicts how these objects would look like in reality."}, {"start": 27.74, "end": 33.26, "text": " This is particularly useful for the film industry, because we can create highly realistic scenes"}, {"start": 33.26, "end": 36.94, "text": " and set them up in a way that we couldn't do in real life."}, {"start": 36.94, "end": 42.22, "text": " We can have any possible object we can imagine, light sources that we wouldn't ever be able"}, {"start": 42.22, "end": 46.66, "text": " to buy, change the time of the day, or even the planet we are on."}, {"start": 46.66, "end": 49.42, "text": " Practically we have an infinite budget."}, {"start": 49.42, "end": 50.42, "text": " That's amazing."}, {"start": 50.42, "end": 56.14, "text": " However, creating such an image takes a long time, often in the order of hours, two"}, {"start": 56.14, "end": 57.14, "text": " days."}, {"start": 57.14, "end": 61.58, "text": " You can see me render an image, and even if the footage is sped up significantly, you"}, {"start": 61.58, "end": 64.74, "text": " can see that this is going to take a long, long time."}, {"start": 64.74, "end": 70.34, "text": " SPP means samples per pixel, so this is the number of light rays we compute for every"}, {"start": 70.34, "end": 71.34, "text": " pixel."}, {"start": 71.34, "end": 74.98, "text": " The more SPP, the cleaner, more converged image we get."}, {"start": 74.98, "end": 78.22, "text": " This technique performs spatial temporal filtering."}, {"start": 78.22, "end": 84.06, "text": " This means that we take a noisy input video and try to eliminate the noise and try to guess"}, {"start": 84.06, "end": 86.3, "text": " how the final image would look like."}, {"start": 86.3, "end": 91.38, "text": " And it can create almost fully converged images from extremely noisy inputs."}, {"start": 91.38, "end": 97.14, "text": " Well, as you can see, these videos are created with one sample per pixel, which is as noisy"}, {"start": 97.14, "end": 98.38, "text": " as it gets."}, {"start": 98.38, "end": 103.46, "text": " These images with the one sample per pixel can be created extremely quickly, in less than"}, {"start": 103.46, "end": 109.22, "text": " 10 milliseconds per image, and this new denoiser also takes around 10 milliseconds to reconstruct"}, {"start": 109.22, "end": 112.06, "text": " the final image from the noisy input."}, {"start": 112.06, "end": 116.46000000000001, "text": " And yes, you heard it right, this is finally a real-time result."}, {"start": 116.46000000000001, "end": 122.42, "text": " This all happens through decoupling the direct and indirect effect of light sources and denoising"}, {"start": 122.42, "end": 123.82000000000001, "text": " them separately."}, {"start": 123.82000000000001, "end": 128.42000000000002, "text": " In the meantime, the algorithm also tries to estimate the amount of noise in different"}, {"start": 
128.42000000000002, "end": 133.18, "text": " parts of the image to provide more useful information for the denoising routines."}, {"start": 133.18, "end": 137.94, "text": " The fact that the entire pipeline runs on the graphics card is a great testament to the"}, {"start": 137.94, "end": 140.26, "text": " simplicity of this algorithm."}, {"start": 140.26, "end": 144.45999999999998, "text": " Here you see the term SVGF, you see the results of the new technique."}, {"start": 144.45999999999998, "end": 150.45999999999998, "text": " So we have these noisy input images with one SPP and look at that."}, {"start": 150.45999999999998, "end": 151.45999999999998, "text": " Wow!"}, {"start": 151.45999999999998, "end": 156.89999999999998, "text": " This is one of those papers that looks like magic, and know your own networks or learning"}, {"start": 156.89999999999998, "end": 159.42, "text": " algorithms have been used in this work."}, {"start": 159.42, "end": 165.85999999999999, "text": " Not so long ago, I speculated or more accurately hoped that real-time photorealistic rendering"}, {"start": 165.85999999999999, "end": 168.73999999999998, "text": " would be a possibility during my lifetime."}, {"start": 168.74, "end": 171.9, "text": " In just a few years later, this paper appears."}, {"start": 171.9, "end": 177.06, "text": " We know that the rate of progress in computer graphics research is just staggering, but this"}, {"start": 177.06, "end": 179.70000000000002, "text": " is too much to handle."}, {"start": 179.70000000000002, "end": 183.74, "text": " Super excited to see where the artists will take this, and of course, I'll be here to"}, {"start": 183.74, "end": 185.86, "text": " show you the coolest follow-up works."}, {"start": 185.86, "end": 205.98000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cLC_GHZCOVQ
OpenAI's Bot Beats DOTA World Champion Dendi | Two Minute Papers #180
Some updates and clarifications follow: Update 1: we seem to have conflicting information on the training times - both 24 hours and 2 weeks was mentioned. We'll make sure to address this when the official paper appears. Update 2: more from OpenAI - https://blog.openai.com/more-on-dota-2/ Update 3: more reddit discussion on how to trick the bot into defeat: https://www.reddit.com/r/DotA2/comments/6t8qvs/openai_bots_were_defeated_atleast_50_times/ (thanks to nikre for the link) Update 4: an OpenAI employee provides more clarification on the training process - https://news.ycombinator.com/item?id=15001521 Apologies for the inaccuracies - I've watched every video and interview I could get my hands on and found quite a bit of conflicting information. I'll take this into consideration next time when something comes up without an official research paper. OpenAI's materials on their DOTA bot: https://blog.openai.com/dota-2/ Day9's DOTA learning videos are available here: https://www.youtube.com/playlist?list=PLgmCLtUkEutILNA9EM0BON6ShoQGZhd3P WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image source: https://blog.openai.com/dota-2/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. It is time for some minds to be blown. Dota 2 is a multiplayer online battle arena game with a huge cult following and world championship events with a prize pool of over 20 million dollars. In this game, players form two teams, control a hero each, and use their strategy and special abilities to defeat the other team. OpenAI recently created an AI for this game that is so good that they challenged the best players in the world. Now note that this program is not playing the full feature set of the game, but a version that is limited to one versus one encounters with several other elements of the game disabled. Since lots of strategy is involved, and as we always discuss in these episodes, long-term planning is the Achilles heel of these learning algorithms. A small blunder in the early game can often snowball out of control by the end of the match, and it is hard for the AI, and sometimes even for humans, to identify these cases. And this game is a huge challenge because, unlike chess and Go, it has lots of incomplete information, and even the simplified one versus one mode involves a reasonable amount of long-term planning. It also involves attacks, trickery, and deceiving an opponent, and can be imagined as a strategy game that also requires significant technical prowess to pull off the most spectacular moves. This game is also designed in a way that new and unfamiliar situations come up all the time, which require lots of experience and split-second decision making to master. This is a true test for any kind of AI. And note that this AI wasn't told anything about the game, not even the rules, and was just instructed to try to find a way to win. The algorithm was trained in 24 hours, and during this time it not only learned the rules and objectives of the game but also pulled off remarkable tactics. For instance, other players were very surprised that the bot didn't take the bait; baiting typically means a smart tactic of giving up a smaller battle in favor of winning a bigger objective. The AI has a ton of experience playing the game and typically sees through these shenanigans. In this game there are also neutral units that we call creeps. When killed, they grant precious gold and experience to our opponent, so we typically try to deny that. If these units encounter an obstacle, they go around it. So players developed a technique by the name of creep blocking, which is the art of holding them up with the hero character to minimize the distance travelled by them in a unit of time. And the AI has not only learned about the existence of this technique by itself, but it also executes it with stunning precision, which is quite remarkable. And again, during the training phase it had never seen any human play the game and do something like this. The other remarkable thing is that when a player disappears in the darkness, the AI predicts what he could be doing, plans around it, and strikes where the player is expected to show up. If you remember, DeepMind's initial Go algorithm contained a bootstrapping step where it was fed a large number of games by human players to grasp the basics. The truly remarkable thing is that none of that happened here. The algorithm was trained for only 24 hours and it only played against itself. When it finally played against Dendi, the reigning world champion, the first match was such a treat, and I was shocked to see that the AI outplayed him.
In the second game, the player tried to create a situation that he thought the AI hadn't encountered before by giving up some creeps to it. The program ruthlessly took advantage of this mistake and defeated him almost immediately. OpenAI's bot not only won, but apparently also broke the will of Dendi, who tapped out after two matches. I feel like someone who has been hit by a sledgehammer. I didn't even know this was being worked on. This is such a remarkable achievement. Usually the first argument I hear is that of course the AI can play non-stop without bathroom breaks or sleep. While admittedly this is also true for some players, the algorithm was only trained for 24 hours. Note that this still means a stupendous amount of games played, but in terms of training time, given that these algorithms typically take from weeks to months to train properly, 24 hours is nothing. The second argument that I often hear is that the AI should of course win every time because it has close to zero reaction time and can perform thousands of actions every second. For instance, if we were to play a game where the goal is to perform the most actions per minute, clearly humans with biological limitations would stand no chance against a computer program. However, in this case, the number of actions that this algorithm performs in a minute is comparable to that of a human player. This means that these results stem from superior technical abilities and planning, and not from the fact that we are talking about a computer. We can look at this result from two different directions. One, we could say, well, no big deal, because this is only a highly limited and hamstrung version of the game, which is way less complex than a fully fledged 5 vs 5 team match. Or two, we could say that the algorithm has shown a remarkable aptitude for learning highly sophisticated technical maneuvers and longer-term strategy in a difficult game, and the rest is only a matter of time. In fact, in 5 vs 5 there is even more room for a highly intelligent program to shine and create new tactics that we have never thought of. I would bet that, if anything, we are going to be even more surprised by the 5 vs 5 results later. We are still lacking a bit in details, but I have contacted the OpenAI guys, who noted that there will be more information available in the next few days. Whenever something new appears, I'll be here to cover it for you Fellow Scholars. If you are new to the series and enjoyed this episode, make sure to subscribe and click the bell icon for two super fun science videos a week. And if you find yourself interested in Dota 2, and admittedly it's hard not to be, and would like to catch up a bit on the basics, make sure to visit Day9's channel, which has a really nice playlist about the fundamentals of the game. There's a link in the description for it, check it out. If you go to his channel, make sure to leave him a kind scholarly comment. Let the world see how courteous the Two Minute Papers listeners are. Thanks for watching and for your generous support, and I'll see you next time.
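For readers curious what "it only played against itself" looks like in code, here is a toy self-play loop on rock-paper-scissors, where two copies of the same policy play each other and a REINFORCE-style update nudges the shared parameters. This is purely an illustration of the training pattern, not OpenAI's actual Dota 2 system; the game, the softmax policy, and the learning rate are all made up for this sketch.

```python
# Toy self-play reinforcement learning: the agent improves by playing against
# a copy of itself, with no human games involved. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)        # shared policy parameters for rock, paper, scissors
lr = 0.05
avg_probs = np.zeros(3)

def sample_action(logits):
    """Draw an action from the softmax policy defined by `logits`."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(3, p=probs), probs

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]

episodes = 20000
for _ in range(episodes):
    a, probs_a = sample_action(logits)   # "player one" samples from the shared policy
    b, _ = sample_action(logits)         # the self-play opponent uses the very same policy
    reward = payoff(a, b)
    # REINFORCE-style update for player one; the opponent acts as a frozen copy this turn.
    grad = -probs_a
    grad[a] += 1.0
    logits += lr * reward * grad
    avg_probs += probs_a / episodes

# Time-averaged strategy; in a zero-sum game like this it tends to hover near
# the uniform, unexploitable mix rather than settling on a single action.
print("time-averaged self-play policy:", np.round(avg_probs, 3))
```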
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karoijona Efehir."}, {"start": 4.64, "end": 7.640000000000001, "text": " It is time for some minds to be blown."}, {"start": 7.640000000000001, "end": 14.120000000000001, "text": " Dota 2 is a multiplayer online battle arena game with a huge cult following and world championship"}, {"start": 14.120000000000001, "end": 18.48, "text": " events with a prize pool of over 20 million dollars."}, {"start": 18.48, "end": 24.560000000000002, "text": " In this game, players form two teams and control a hero each and use their strategy and special"}, {"start": 24.560000000000002, "end": 27.2, "text": " abilities to defeat the other team."}, {"start": 27.2, "end": 32.92, "text": " Even AI recently created an AI for this game that is so good that they challenged the"}, {"start": 32.92, "end": 34.8, "text": " best players in the world."}, {"start": 34.8, "end": 39.44, "text": " Now note that this program is not playing the full feature set of the game, but a version"}, {"start": 39.44, "end": 45.44, "text": " that is limited to one versus one encounters with several other elements of the game disabled."}, {"start": 45.44, "end": 51.2, "text": " Since lots of strategies involved, we always discuss in these episodes that long term planning"}, {"start": 51.2, "end": 54.32, "text": " is the Achilles heel of these learning algorithms."}, {"start": 54.32, "end": 59.92, "text": " A small blender in the early game can often snowball out of control by the end of the match"}, {"start": 59.92, "end": 65.88, "text": " and it is hard for the AI and sometimes to even humans to identify these cases."}, {"start": 65.88, "end": 71.2, "text": " And this game is a huge challenge because unlike chess and go, it has lots of incomplete"}, {"start": 71.2, "end": 77.6, "text": " information and even the simplified one versus one mode involves a reasonable amount of long-term"}, {"start": 77.6, "end": 78.6, "text": " planning."}, {"start": 78.6, "end": 84.28, "text": " It also involves attacks, trickery and deceiving an opponent and can be imagined as a strategy"}, {"start": 84.28, "end": 91.0, "text": " game that also requires significant technical prowess to pull off the most spectacular moves."}, {"start": 91.0, "end": 96.36, "text": " This game is also designed in a way that new and unfamiliar situations come up all the"}, {"start": 96.36, "end": 101.72, "text": " time, which require lots of experience and split second decision making to master."}, {"start": 101.72, "end": 104.6, "text": " This is a true test for any kind of AI."}, {"start": 104.6, "end": 110.08, "text": " And note that this AI wasn't told anything about the game, not even the rules and was"}, {"start": 110.08, "end": 113.36, "text": " just instructed to try to find a way to win."}, {"start": 113.36, "end": 119.44, "text": " The algorithm was trained in 24 hours and during this time it not only learned the rules"}, {"start": 119.44, "end": 124.2, "text": " and objectives of the game but it also pulls off remarkable tactics."}, {"start": 124.2, "end": 129.12, "text": " For instance, other players were very surprised that the bot didn't take the bait, which"}, {"start": 129.12, "end": 135.04, "text": " typically means a smart tactic involving giving up a smaller battle in favor of winning"}, {"start": 135.04, "end": 136.36, "text": " a bigger objective."}, {"start": 136.36, "end": 141.72, "text": " The AI has a ton of experience playing the game and typically sees 
through these shenanigans."}, {"start": 141.72, "end": 145.88, "text": " In this game there are also neutral units that we call creep."}, {"start": 145.88, "end": 151.32, "text": " When killed they grant precious gold and experience to our opponent so we typically try to deny"}, {"start": 151.32, "end": 152.32, "text": " that."}, {"start": 152.32, "end": 155.24, "text": " If these units encounter an obstacle they go around it."}, {"start": 155.24, "end": 160.68, "text": " So players develop the technique by the name Creep Blocking, which is the art of holding"}, {"start": 160.68, "end": 165.24, "text": " them up by the hero character to minimize the distance travelled by them in a unit of"}, {"start": 165.24, "end": 166.24, "text": " time."}, {"start": 166.24, "end": 171.2, "text": " And the AI has not only learned about the existence of this technique by itself but it"}, {"start": 171.2, "end": 175.72, "text": " also executes it with stunning precision which is quite remarkable."}, {"start": 175.72, "end": 180.76, "text": " And again during the training phase it had never seen any human play the game and do"}, {"start": 180.76, "end": 182.2, "text": " something like this."}, {"start": 182.2, "end": 187.79999999999998, "text": " The other remarkable thing is that when a player disappears in the darkness, the AI predicts"}, {"start": 187.79999999999998, "end": 193.0, "text": " what he could be doing, plans around it and strikes where the player is expected to show"}, {"start": 193.0, "end": 194.0, "text": " up."}, {"start": 194.0, "end": 198.72, "text": " If you remember, DeepMind's initial go algorithm contained a bootstrapping step where"}, {"start": 198.72, "end": 203.35999999999999, "text": " it was fed a large amount of games by players to grasp the basics."}, {"start": 203.35999999999999, "end": 206.8, "text": " The truly remarkable thing is that none of that happened here."}, {"start": 206.8, "end": 212.4, "text": " The algorithm was trained for only 24 hours and it only played against itself."}, {"start": 212.4, "end": 217.2, "text": " When it finally played against Dendi, the reigning world champion, the first match was"}, {"start": 217.2, "end": 222.0, "text": " such a treat and I was shocked to see that the AI has outplayed him."}, {"start": 222.0, "end": 227.32, "text": " In the second game, the player tried to create a situation that he thought the AI hasn't"}, {"start": 227.32, "end": 230.51999999999998, "text": " encountered before by giving up some creep to it."}, {"start": 230.51999999999998, "end": 236.4, "text": " The program ruthlessly took advantage of this mistake and defeated him almost immediately."}, {"start": 236.4, "end": 242.23999999999998, "text": " Open AI is bought not only one but apparently also broke the will of Dendi who tapped out"}, {"start": 242.23999999999998, "end": 243.48, "text": " after two matches."}, {"start": 243.48, "end": 270.8, "text": " I feel like someone being hit by a sledgehammer."}, {"start": 270.8, "end": 273.36, "text": " I didn't even know this was being worked on."}, {"start": 273.36, "end": 275.96000000000004, "text": " This is such a remarkable achievement."}, {"start": 275.96000000000004, "end": 281.6, "text": " Usually the first argument I hear is that of course the AI can play non-stop without"}, {"start": 281.6, "end": 283.72, "text": " bathroom breaks or sleep."}, {"start": 283.72, "end": 288.84000000000003, "text": " While admittedly this is also true for some players, the algorithm was only trained for"}, {"start": 
288.84000000000003, "end": 290.16, "text": " 24 hours."}, {"start": 290.16, "end": 294.8, "text": " Note that the steel means a stupendous amount of games played but in terms of training"}, {"start": 294.8, "end": 295.8, "text": " time."}, {"start": 295.8, "end": 301.16, "text": " That these algorithms typically take from weeks to months to train properly 24 hours is"}, {"start": 301.16, "end": 302.16, "text": " nothing."}, {"start": 302.16, "end": 307.64, "text": " The second argument that I often hear is that the AI should of course win every time because"}, {"start": 307.64, "end": 313.92, "text": " it has close to zero reaction time and can perform thousands of actions every second."}, {"start": 313.92, "end": 319.08000000000004, "text": " For instance if we would play a game where the goal is to perform the most amount of actions"}, {"start": 319.08000000000004, "end": 324.44, "text": " per minute, clearly humans with biological limitations would stand no chance against"}, {"start": 324.44, "end": 325.92, "text": " the computer program."}, {"start": 325.92, "end": 330.84, "text": " However in this case the number of actions that this algorithm performs in a minute is"}, {"start": 330.84, "end": 333.68, "text": " comparable to that of a human player."}, {"start": 333.68, "end": 338.68, "text": " This means that these results stem from superior technical abilities and planning and not"}, {"start": 338.68, "end": 341.6, "text": " from the fact that we are talking about the computer."}, {"start": 341.6, "end": 344.84, "text": " We can look at this result from two different directions."}, {"start": 344.84, "end": 350.56, "text": " One could be saying well no big deal because this is only a highly limited and hamstrung"}, {"start": 350.56, "end": 356.6, "text": " version of the game which is way less complex than a fully fleshed 5 vs 5 team match."}, {"start": 356.6, "end": 362.4, "text": " Or two we could say that the algorithm had shown a remarkable aptitude for learning highly"}, {"start": 362.4, "end": 368.32, "text": " sophisticated technical maneuvers and longer term strategy in a difficult game and the rest"}, {"start": 368.32, "end": 370.12, "text": " is only a matter of time."}, {"start": 370.12, "end": 376.28, "text": " In fact in 5 vs 5 there is even more room for a highly intelligent program to shine and"}, {"start": 376.28, "end": 379.12, "text": " create new tactics that we have never thought of."}, {"start": 379.12, "end": 384.56, "text": " I would bet that if anything we are going to be even more surprised by the 5 vs 5 results"}, {"start": 384.56, "end": 385.56, "text": " later."}, {"start": 385.56, "end": 390.24, "text": " We are still lacking in details a bit but I have contacted the open AI guys who noted"}, {"start": 390.24, "end": 394.0, "text": " that there will be more information available in the next few days."}, {"start": 394.0, "end": 398.36, "text": " Whenever something new appears I'll be here to cover it for you fellow scholars."}, {"start": 398.36, "end": 402.52, "text": " If you are new to the series and enjoyed this episode make sure to subscribe and click"}, {"start": 402.52, "end": 406.36, "text": " the bell icon for two super fun science videos a week."}, {"start": 406.36, "end": 411.72, "text": " And if you find yourself interested in Dota 2 and admittedly it's hard not to and would"}, {"start": 411.72, "end": 414.2, "text": " like to catch up a bit on the basics."}, {"start": 414.2, "end": 418.96000000000004, "text": " Make sure to visit Day 9's 
channel who has a really nice playlist about the fundamentals"}, {"start": 418.96000000000004, "end": 419.96000000000004, "text": " of the game."}, {"start": 419.96000000000004, "end": 422.56, "text": " There's a link in the description for it, check it out."}, {"start": 422.56, "end": 426.52000000000004, "text": " If you go to his channel make sure to leave him a kind scholarly comment."}, {"start": 426.52000000000004, "end": 430.16, "text": " Let the world see how courteous the two minute papers listeners are."}, {"start": 430.16, "end": 437.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_DN2rzHkpZE
Verifying Mission-Critical AI Programs | Two Minute Papers #179
The paper "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks" is available here: https://arxiv.org/pdf/1702.01135.pdf Out Patreon page: https://www.patreon.com/TwoMinutePapers Earlier episodes that were showcased: pix2pix - https://www.youtube.com/watch?v=u7kQ5lNfUfg Breaking DeepMind's Game AI System - https://www.youtube.com/watch?v=QFu0vZgMcqk WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2072618/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper does not contain the usual fireworks that you are used to in Two Minute Papers, but I feel that this is a very important story that needs to be told to everyone. In computer science, we encounter many interesting problems, like finding the shortest path between two given streets in a city or measuring the stability of a bridge. Up until a few years ago, these were almost exclusively solved by traditional handcrafted techniques. This means a class of techniques that were designed by hand by scientists and are often specific to the problem we have at hand. Different problem, different algorithm. Fast forward to a few years ago, when we witnessed an amazing resurgence of neural networks and learning algorithms. Many problems that were previously thought to be unsolvable crumbled quickly, one after another. Now it is clear that the age of AI is coming, and clearly there are possible applications of it that we need to be very cautious with. Since we design these traditional techniques by hand, the failure cases are often known, because these algorithms are simple enough that we can look under the hood and make reasonable assumptions. This is not the case with deep neural networks. We know that in some cases neural networks are unreliable, but it is remarkably hard to identify these failure cases. For instance, earlier we talked about a technique by the name of pix2pix, where we could make a crude drawing of a cat and it would translate it to a real image. It works spectacularly in many cases, but Twitter was also full of examples with really amusing failure cases. Beyond the unreliability, we have a much bigger problem, and that problem is adversarial examples. In an earlier episode, we discussed an adversarial algorithm where, in an amusing example, they added a tiny bit of barely perceptible noise to this image to make the deep neural network misidentify a bus as an ostrich. We can even train a new neural network that is specifically tailored to break the one we have, opening up the possibility of targeted attacks against it. To alleviate this problem, it is always a good idea to make sure that these neural networks are also trained on adversarial inputs. But how do we know how many other possible adversarial examples exist that we haven't found yet? The paper discusses a way of verifying important properties of neural networks. For instance, it can measure the adversarial robustness of such a network, and this is super useful, because it gives us information on whether there are possible forged inputs that could break our learning systems. The paper also contains a nice little experiment with airborne collision avoidance systems. The goal here is avoiding mid-air collisions between commercial aircraft while minimizing the number of alerts. As a small-scale thought experiment, we can train a neural network to replace an existing system, but in this case, such a neural network would have to be verified, and that is now finally a possibility. Now, make no mistake, this does not mean that there are any sort of aircraft safety systems deployed in the industry that rely on neural networks. No, no, absolutely not. This is a small-scale, "what if" kind of experiment that may prove to be a first step towards something really exciting. This is one of those incredible papers that, even without the usual visual fireworks, makes me feel that I am a part of the future.
This is a step towards the future where we can prove that a learning algorithm is guaranteed to work in mission-critical systems. I would also like to note that even if this episode is not meant to go viral on the internet, it is still an important story to be told. Normally, creating videos like this would be financial suicide, but we are not hurt by this at all, because we get stable support from you on Patreon. And that's what it is all about: worrying less about views and spending more time talking about what's really important. Absolutely amazing. Thanks for watching and for your generous support, and I'll see you next time.
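To make the idea of "verifying a property of a neural network" concrete, the sketch below certifies local adversarial robustness of a tiny ReLU network using simple interval arithmetic (interval bound propagation). This is a sound but incomplete check and much weaker than the SMT-based Reluplex solver in the paper; the weights and the input point are invented for illustration.

```python
# Minimal sketch of checking adversarial robustness with interval bound propagation.
# If the output interval over the whole L-infinity ball stays on one side of zero,
# no perturbation inside that ball can flip the decision.
import numpy as np

# A made-up 2-2-1 ReLU network (not from the paper).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, -1.0]]);             b2 = np.array([0.2])

def interval_affine(W, b, lo, hi):
    """Propagate an input box [lo, hi] through x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def output_bounds(x, eps):
    """Bounds of the scalar output over the L-infinity ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(W1, b1, lo, hi)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(W2, b2, lo, hi)
    return lo[0], hi[0]

x = np.array([0.3, 0.1])
for eps in (0.01, 0.05, 0.2):
    lo, hi = output_bounds(x, eps)
    # A certificate holds only if the whole interval avoids zero; otherwise the
    # check is inconclusive (it does not prove that an adversarial input exists).
    verified = (lo > 0.0) or (hi < 0.0)
    print(f"eps={eps}: output in [{lo:+.3f}, {hi:+.3f}], robust certificate: {verified}")
```

The paper's SMT approach answers such queries exactly rather than with loose intervals, which is what makes it suitable for properties of systems like the collision avoidance network mentioned above.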
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.72, "end": 9.68, "text": " This paper does not contain the usual fireworks that you are used to in two-minute papers,"}, {"start": 9.68, "end": 14.66, "text": " but I feel that this is a very important story that needs to be told to everyone."}, {"start": 14.66, "end": 20.04, "text": " In computer science, we encounter many interesting problems, like finding the shortest path"}, {"start": 20.04, "end": 24.64, "text": " between two given streets in a city or measuring the stability of a bridge."}, {"start": 24.64, "end": 30.52, "text": " Up until a few years ago, these were almost exclusively solved by traditional handcrafted"}, {"start": 30.52, "end": 31.52, "text": " techniques."}, {"start": 31.52, "end": 36.24, "text": " This means a class of techniques that were designed by hand by scientists and are often"}, {"start": 36.24, "end": 38.92, "text": " specific to the problem we have at hand."}, {"start": 38.92, "end": 41.32, "text": " Different problem, different algorithm."}, {"start": 41.32, "end": 47.08, "text": " And fast forward to a few years ago, we witnessed an amazing resurgence of neural networks"}, {"start": 47.08, "end": 48.8, "text": " and learning algorithms."}, {"start": 48.8, "end": 53.92, "text": " Many problems that were previously thought to be unsolvable crumbled quickly one after"}, {"start": 53.92, "end": 54.92, "text": " another."}, {"start": 54.92, "end": 60.04, "text": " Now it is clear that the age of AI is coming, and clearly there are possible applications"}, {"start": 60.04, "end": 62.88, "text": " of it that we need to be very cautious with."}, {"start": 62.88, "end": 67.92, "text": " Since we design these traditional techniques by hand, the failure cases are often known"}, {"start": 67.92, "end": 72.8, "text": " because these algorithms are simple enough that we can look under the hood and make reasonable"}, {"start": 72.8, "end": 73.8, "text": " assumptions."}, {"start": 73.8, "end": 76.36, "text": " This is not the case with deep neural networks."}, {"start": 76.36, "end": 81.8, "text": " We know that in some cases neural networks are unreliable, but it is remarkably hard to"}, {"start": 81.8, "end": 84.24, "text": " identify these failure cases."}, {"start": 84.24, "end": 88.96, "text": " For instance, earlier we talked about this technique by the name PIX2PIX, where we could"}, {"start": 88.96, "end": 93.96, "text": " make a crude drawing of a cat and it would translate it to a real image."}, {"start": 93.96, "end": 99.44, "text": " It works spectacularly in many cases, but Twitter was also full of examples with really"}, {"start": 99.44, "end": 101.44, "text": " amusing failure cases."}, {"start": 101.44, "end": 106.24, "text": " Beyond the unreliability, we have a much bigger problem, and that problem is adversarial"}, {"start": 106.24, "end": 107.24, "text": " examples."}, {"start": 107.24, "end": 112.52, "text": " In an earlier episode, we discussed an adversarial algorithm, wherein in an amusing example, they"}, {"start": 112.52, "end": 117.91999999999999, "text": " added a tiny bit of barely perceptible noise to this image to make the deep neural network"}, {"start": 117.91999999999999, "end": 120.88, "text": " misidentify a bus for an ostrich."}, {"start": 120.88, "end": 126.08, "text": " We can even train a new neural network that is specifically tailored to break the one we"}, {"start": 126.08, "end": 130.44, "text": " 
have, opening up the possibility of targeted attacks against it."}, {"start": 130.44, "end": 135.56, "text": " To alleviate this problem, it is always a good idea to make sure that these neural networks"}, {"start": 135.56, "end": 138.72, "text": " are also trained on adversarial inputs as well."}, {"start": 138.72, "end": 143.96, "text": " But how do we know how many possible other adversarial examples exist that we haven't"}, {"start": 143.96, "end": 144.96, "text": " found yet?"}, {"start": 144.96, "end": 150.12, "text": " The paper discusses a way of verifying important properties of neural networks."}, {"start": 150.12, "end": 155.52, "text": " For instance, it can measure the adversarial robustness of such a network, and this is super"}, {"start": 155.52, "end": 160.72, "text": " useful, because it gives us information whether there are possible forged inputs that could"}, {"start": 160.72, "end": 162.6, "text": " break our learning systems."}, {"start": 162.6, "end": 168.04, "text": " The paper also contains a nice little experiment with airborne collision avoidance systems."}, {"start": 168.04, "end": 173.16, "text": " The goal here is avoiding mid-air collisions between commercial aircrafts while minimizing"}, {"start": 173.16, "end": 174.79999999999998, "text": " the number of alerts."}, {"start": 174.79999999999998, "end": 179.72, "text": " As a small-scale thought experiment, we can train a neural network to replace an existing"}, {"start": 179.72, "end": 184.64, "text": " system, but in this case, such a neural network would have to be verified, and it is now"}, {"start": 184.64, "end": 186.95999999999998, "text": " finally a possibility."}, {"start": 186.95999999999998, "end": 192.28, "text": " Now, make no mistake, this does not mean that there are any sort of aircraft safety systems"}, {"start": 192.28, "end": 195.96, "text": " deployed in the industry that are relying on neural networks."}, {"start": 195.96, "end": 198.04, "text": " No, no, absolutely not."}, {"start": 198.04, "end": 204.04, "text": " This is a small scale, what if kind of experiment that may prove to be a first step towards something"}, {"start": 204.04, "end": 205.36, "text": " really exciting?"}, {"start": 205.36, "end": 210.32, "text": " This is one of those incredible papers that even without the usual visual fireworks makes"}, {"start": 210.32, "end": 212.8, "text": " me feel that I am a part of the future."}, {"start": 212.8, "end": 217.76, "text": " This is a step towards the future where we can prove that a learning algorithm is guaranteed"}, {"start": 217.76, "end": 220.2, "text": " to work in mission-critical systems."}, {"start": 220.2, "end": 225.07999999999998, "text": " I would also like to note that even if this episode is not meant to go viral on the internet,"}, {"start": 225.07999999999998, "end": 228.04, "text": " it is still an important story to be told."}, {"start": 228.04, "end": 232.72, "text": " Normally, creating videos like this would be a financial suicide, but we are not hurt"}, {"start": 232.72, "end": 236.95999999999998, "text": " by this at all because we get stable support from you on Patreon."}, {"start": 236.95999999999998, "end": 238.72, "text": " And that's what it is all about."}, {"start": 238.72, "end": 243.39999999999998, "text": " Worrying less about views and spending more time talking about what's really important."}, {"start": 243.39999999999998, "end": 244.67999999999998, "text": " Absolutely amazing."}, {"start": 244.68, "end": 251.68, "text": " Thanks for 
watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=xp-YOPcjkFw
DeepMind's AI Learns Imagination-Based Planning | Two Minute Papers #178
The paper "Imagination-Augmented Agents for Deep Reinforcement Learning" is available here: https://arxiv.org/abs/1707.06203 Out Patreon page with the details: https://www.patreon.com/TwoMinutePapers WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Eric Swenson, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-767781/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A bit more than two years ago, the DeepMind guys implemented an algorithm that could play Atari Breakout at a superhuman level by looking at the video feed that you see here. And the news immediately took the world by storm. This original paper is a bit more than two years old and has already been referenced in well over a thousand other research papers. That is one powerful paper. This algorithm was based on a combination of a neural network and reinforcement learning. The neural network was used to understand the video feed, and reinforcement learning is there to come up with the appropriate actions. This is the part that plays the game. Reinforcement learning is very suitable for tasks where we are in a changing environment and we need to choose an appropriate action based on our surroundings to maximize some sort of score. This score can be, for instance, how far we've gotten in a labyrinth, or how many collisions we have avoided with the helicopter, or any sort of score that reflects how well we are currently doing. And this algorithm works similarly to how an animal learns new things. It observes the environment, tries different actions, and sees if they worked well. If yes, it will keep doing that. If not, well, let's try something else. Pavlov's dog with the bell is an excellent example of that. There are many existing works in this area, and it performs remarkably well for a number of problems and computer games, but only if the reward comes relatively quickly after the action. For instance, in Breakout, if we miss the ball, we lose a life immediately, but if we hit it, we'll almost immediately break some bricks and increase our score. This is more than suitable for a well-built reinforcement learning algorithm. However, this earlier work didn't perform well on any games that required long-term planning. If Pavlov gave his dog a treat for something that it did two days ago, the animal would have no clue as to which action led to this tasty reward. The subject of this work is a game where we control this green character, and our goal is to push the boxes onto the red dots. This game is particularly difficult, not only for algorithms but even for humans, for two important reasons. One, it requires long-term planning, which, as we know, is a huge issue for reinforcement learning algorithms. Just because a box is next to a dot doesn't mean that it is the one that belongs there. This is a particularly nasty property of the game. And two, some mistakes we make are irreversible. For instance, pushing a box into a corner can make it impossible to complete the level. If we have an algorithm that tries a bunch of actions and sees if they stick, well, that's not going to work here. It is now hopefully easy to see that this is an obscenely difficult problem. And the DeepMind guys just came up with imagination-augmented agents as a solution for it. So what is behind this really cool name? The interesting part about this novel architecture is that it uses imagination, which is a routine to cook up not only one action, but entire plans consisting of several steps, and finally choose the one that has the greatest expected reward over the long term. It takes information about the present, imagines possible futures, and chooses the one with the most handsome reward. And as you can see, this is only the first paper on this new architecture, and it can already solve a problem with seven boxes. This is just unreal, absolutely amazing work.
And please note that this is a fairly general algorithm that can be used for a number of different problems. This particular game was just one way of demonstrating the attractive properties of this new technique. The paper contains more results and is a great read. Make sure to have a look. Also, if you've enjoyed this episode, please consider supporting Two Minute Papers on Patreon. Details are available in the video description. Have a look. Thanks for watching and for your generous support, and I'll see you next time.
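The "imagination" described above can be illustrated with a bare-bones model-based planner: simulate candidate action sequences with an environment model and execute the plan with the highest predicted return. The gridworld, step penalty, and exhaustive search below are stand-ins invented for this sketch; the actual I2A architecture learns the environment model and encodes imagined rollouts with neural networks rather than enumerating plans.

```python
# Toy "plan by imagining futures" loop: try short action sequences in a model of
# the world, score each imagined trajectory, and keep the best plan.
from itertools import product

GOAL = (3, 3)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def model_step(state, action):
    """A perfect 'imagination' model of a 4x4 gridworld (in I2A this model is learned)."""
    dx, dy = ACTIONS[action]
    x = min(max(state[0] + dx, 0), 3)
    y = min(max(state[1] + dy, 0), 3)
    reward = 1.0 if (x, y) == GOAL else -0.05   # small penalty per wasted step
    return (x, y), reward

def imagine_return(state, plan):
    """Roll the plan forward in imagination and add up the predicted rewards."""
    total = 0.0
    for action in plan:
        state, reward = model_step(state, action)
        total += reward
        if state == GOAL:
            break
    return total

def plan_with_imagination(state, horizon=6):
    """Enumerate every action sequence up to `horizon` steps and keep the best one."""
    best_plan, best_return = None, float("-inf")
    for plan in product(ACTIONS, repeat=horizon):
        value = imagine_return(state, plan)
        if value > best_return:
            best_plan, best_return = plan, value
    return best_plan, best_return

state = (0, 0)
plan, value = plan_with_imagination(state)
print("imagined best plan:", plan, "expected return:", round(value, 2))
```

Exhaustive enumeration only works for tiny horizons like this; the appeal of the paper's approach is that a learned rollout policy and encoder make the same idea scale to games like the box-pushing puzzle above.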
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ijona Ifeher."}, {"start": 4.72, "end": 9.72, "text": " A bit more than two years ago, the Deep Mind Guys implemented an algorithm that could play"}, {"start": 9.72, "end": 15.16, "text": " Atari Breakout on a superhuman level by looking at the video feed that you see here."}, {"start": 15.16, "end": 17.96, "text": " And the news immediately took the world by storm."}, {"start": 17.96, "end": 22.72, "text": " This original paper is a bit more than two years old and has already been referenced in"}, {"start": 22.72, "end": 25.76, "text": " well over a thousand other research papers."}, {"start": 25.76, "end": 27.68, "text": " That is one powerful paper."}, {"start": 27.68, "end": 32.84, "text": " This algorithm was based on a combination of a neural network and reinforcement learning."}, {"start": 32.84, "end": 37.24, "text": " The neural network was used to understand the video feed and reinforcement learning is"}, {"start": 37.24, "end": 40.12, "text": " there to come up with the appropriate actions."}, {"start": 40.12, "end": 42.4, "text": " This is the part that plays the game."}, {"start": 42.4, "end": 47.239999999999995, "text": " Reinforcement learning is very suitable for tasks where we are in a changing environment"}, {"start": 47.239999999999995, "end": 52.480000000000004, "text": " and we need to choose an appropriate action based on our surroundings to maximize some"}, {"start": 52.480000000000004, "end": 53.92, "text": " sort of score."}, {"start": 53.92, "end": 59.160000000000004, "text": " This score can be, for instance, how far we've gotten in a labyrinth or how many collisions"}, {"start": 59.160000000000004, "end": 64.4, "text": " we have avoided with the helicopter or any sort of score that reflects how well we are currently"}, {"start": 64.4, "end": 65.4, "text": " doing."}, {"start": 65.4, "end": 69.52, "text": " And this algorithm works similarly to how an animal learns new things."}, {"start": 69.52, "end": 74.6, "text": " It observes the environment, tries different actions and sees if they worked well."}, {"start": 74.6, "end": 76.72, "text": " If yes, it will keep doing that."}, {"start": 76.72, "end": 79.4, "text": " If not, well, let's try something else."}, {"start": 79.4, "end": 82.76, "text": " Pavlov's dog with the bell is an excellent example of that."}, {"start": 82.76, "end": 87.80000000000001, "text": " There are many existing works in this area and it performs remarkably well for a number"}, {"start": 87.80000000000001, "end": 93.88000000000001, "text": " of problems and computer games, but only if the reward comes relatively quickly after"}, {"start": 93.88000000000001, "end": 94.88000000000001, "text": " the action."}, {"start": 94.88000000000001, "end": 99.52000000000001, "text": " For instance, in breakout, if we miss the ball, we lose a life immediately, but if we hit"}, {"start": 99.52000000000001, "end": 103.68, "text": " it, we'll almost immediately break some breaks and increase our score."}, {"start": 103.68, "end": 107.76, "text": " This is more than suitable for a well-built reinforcement learner algorithm."}, {"start": 107.76, "end": 114.12, "text": " However, this earlier work didn't perform well on any games that required long-term planning."}, {"start": 114.12, "end": 119.24000000000001, "text": " If Pavlov gave his dog a treat for something that it did two days ago, the animal would"}, {"start": 119.24000000000001, "end": 123.4, "text": " 
have no clue as to which action led to this tasty reward."}, {"start": 123.4, "end": 128.52, "text": " And this work subject is a game where we control this green character and our goal is to"}, {"start": 128.52, "end": 130.96, "text": " push the boxes onto the red dots."}, {"start": 130.96, "end": 135.96, "text": " This game is particularly difficult, not only for algorithms, but even humans because"}, {"start": 135.96, "end": 138.04000000000002, "text": " of two important reasons."}, {"start": 138.04000000000002, "end": 143.32000000000002, "text": " One, it requires long-term planning, which, as we know, is a huge issue for reinforcement"}, {"start": 143.32000000000002, "end": 144.96, "text": " learning algorithms."}, {"start": 144.96, "end": 149.72, "text": " Just because a box is next to a dot doesn't mean that it is the one that belongs there."}, {"start": 149.72, "end": 152.88, "text": " This is a particularly nasty property of the game."}, {"start": 152.88, "end": 156.32, "text": " And two, some mistakes we make are irreversible."}, {"start": 156.32, "end": 161.24, "text": " For instance, pushing a box in a corner can make it impossible to complete the level."}, {"start": 161.24, "end": 166.44, "text": " If we have an algorithm that tries a bunch of actions and sees if they stick, well, that's"}, {"start": 166.44, "end": 168.0, "text": " not going to work here."}, {"start": 168.0, "end": 172.4, "text": " It is now hopefully easy to see that this is an obscenely difficult problem."}, {"start": 172.4, "end": 178.20000000000002, "text": " And the deep-mind guys just came up with imagination-augmented agents as a solution for it."}, {"start": 178.20000000000002, "end": 180.72, "text": " So what is behind this really cool name?"}, {"start": 180.72, "end": 186.08, "text": " The interesting part about this novel architecture is that it uses imagination, which is a routine"}, {"start": 186.08, "end": 192.88000000000002, "text": " to cook up not only one action, but entire plans consisting of several steps, and finally,"}, {"start": 192.88000000000002, "end": 197.0, "text": " choose one that has the greatest expected reward over the long term."}, {"start": 197.0, "end": 202.4, "text": " It takes information about the present and imagines possible futures and chooses the one with"}, {"start": 202.4, "end": 204.12, "text": " the most handsome reward."}, {"start": 204.12, "end": 209.0, "text": " And as you can see, this is only the first paper on this new architecture and it can already"}, {"start": 209.0, "end": 211.64000000000001, "text": " solve a problem with seven boxes."}, {"start": 211.64000000000001, "end": 215.08, "text": " This is just unreal, absolutely amazing work."}, {"start": 215.08, "end": 219.68, "text": " And please note that this is a fairly general algorithm that can be used for a number of"}, {"start": 219.68, "end": 220.84, "text": " different problems."}, {"start": 220.84, "end": 225.44, "text": " This particular game was just one way of demonstrating the attractive properties of this"}, {"start": 225.44, "end": 226.52, "text": " new technique."}, {"start": 226.52, "end": 229.44, "text": " The paper contains more results and is a great read."}, {"start": 229.44, "end": 230.44, "text": " Make sure to have a look."}, {"start": 230.44, "end": 236.0, "text": " Also, if you've enjoyed this episode, please consider supporting two-minute papers on Patreon."}, {"start": 236.0, "end": 238.16000000000003, "text": " Details are available in the video description."}, {"start": 
238.16000000000003, "end": 239.16000000000003, "text": " Have a look."}, {"start": 239.16, "end": 254.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=vmkqFRyNUWo
AI Learns Semantic Style Transfer | Two Minute Papers #177
The paper "Visual Attribute Transfer through Deep Image Analogy" and its source code is available here: https://arxiv.org/pdf/1705.01088.pdf https://github.com/msracver/Deep-Image-Analogy WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1895653/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an amazing area in machine learning and AI research where we take two images. Image number one is an input photograph, and image number two is the desired style. And the output of this process is the content of image number one with the style of image number two. This first paper opened up an incredible new area of research. As a result, a ton of different variants have emerged in the last two years: feed-forward style transfer for close to real-time results, temporally coherent style transfer for videos, and much, much more. And this one not only outperforms previously existing techniques, but also broadens the horizon of possible style transfer applications. And obviously, a human would be best at doing this, because a human has an understanding of the objects seen in these images. And now, hold on to your papers, because the main objective of this method is to create semantically meaningful results for style transfer. It is meant to do well with input image pairs that may look completely different visually, but have some semantic components that are similar. For instance, a photograph of a human face and the drawing of a virtual character is an excellent example of that. In this case, this learning algorithm recognizes that they both have noses and uses this valuable information in the style transfer process. As a result, it has three super cool applications. First, the regular photo to style transfer that we all know and love. Second, it is also capable of swapping the styles of two input images. Third, and hold on to your papers, because this is going to be even more insane: style or sketch to photo. And we have a plus one here as well, so fourth, it also supports color transfer between photographs, which will allow creating amazing time lapse videos. I always try to lure you Fellow Scholars into looking at these papers, so make sure to have a look at the paper for some more results on this. And you can see here that this method was compared to several other techniques; for instance, you can see the cycle consistency paper and PatchMatch. And this is one of those moments when I get super happy, because more than 170 episodes into the series, we can not only appreciate the quality of these new results, but we also had previous episodes about both of these algorithms. As always, the links are available in the video description. Make sure to have a look, it's going to be a lot of fun. The source code of this project is also available. We also have a ton of episodes on computer graphics in the series. Make sure to have a look at those as well. Every now and then, I get emails from viewers who say that they came for the AI videos and, just in case, watched a recent episode on computer graphics and were completely hooked. Give it a try. Thanks for watching and for your generous support, and I'll see you next time.
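As a concrete taste of the fourth application mentioned above, here is a classic statistical color transfer that matches per-channel mean and standard deviation between two photographs. This is not the deep-feature, semantically guided transfer of the Deep Image Analogy paper, just the simplest baseline for the same task; the synthetic "photos" at the bottom are placeholders for real images loaded as float arrays.

```python
# Minimal sketch of color transfer between two photographs by matching per-channel
# statistics. Illustration only; the paper does this with deep features instead.
import numpy as np

def color_transfer(content, style):
    """Shift `content`'s per-channel mean/std to match those of `style`.

    Both inputs are float arrays of shape (H, W, 3) with values in [0, 1].
    """
    out = content.astype(np.float64).copy()
    style = style.astype(np.float64)
    for c in range(3):
        c_mean, c_std = out[..., c].mean(), out[..., c].std() + 1e-8
        s_mean, s_std = style[..., c].mean(), style[..., c].std()
        out[..., c] = (out[..., c] - c_mean) / c_std * s_std + s_mean
    return np.clip(out, 0.0, 1.0)

# Toy usage with synthetic images standing in for real photographs.
rng = np.random.default_rng(1)
content = rng.uniform(0.2, 0.6, size=(32, 32, 3))   # dull, low-contrast "photo"
style = rng.uniform(0.0, 1.0, size=(32, 32, 3))     # vivid, high-contrast "photo"
result = color_transfer(content, style)
print("content mean/std:", content.mean(0).mean(0).round(2), content.std().round(2))
print("result  mean/std:", result.mean(0).mean(0).round(2), result.std().round(2))
```

For time-lapse-style results as in the video, the same statistics matching would be applied frame by frame against a chosen reference photograph.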
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Efehir."}, {"start": 4.6000000000000005, "end": 11.0, "text": " Style Transfer is an amazing area in machine learning and AI research where we take two images."}, {"start": 11.0, "end": 16.8, "text": " Image number one is an input photograph and Image number two is the desired style."}, {"start": 16.8, "end": 23.3, "text": " And the output of this process is the content of Image number one with the style of Image number two."}, {"start": 23.3, "end": 27.5, "text": " This first paper opened up an incredible new area of research."}, {"start": 27.5, "end": 32.1, "text": " As a result, a ton of different variants have emerged in the last two years."}, {"start": 32.1, "end": 39.8, "text": " Feet forward style transfer for close to real-time results, temporal leak adherence style transfer for videos and much much more."}, {"start": 39.8, "end": 48.2, "text": " And this one not only outperforms previously existing techniques but also broadens the horizon of possible style transfer applications."}, {"start": 48.2, "end": 56.2, "text": " And obviously, a human would be best at doing this because a human has an understanding of the objects seen in these images."}, {"start": 56.2, "end": 65.0, "text": " And now, hold on to your papers because the main objective of this method is to create semantically meaningful results for style transfer."}, {"start": 65.0, "end": 73.7, "text": " It is meant to do well with input image pairs that may look completely different visually but have some semantic components that are similar."}, {"start": 73.7, "end": 80.9, "text": " For instance, a photograph of a human face and the drawing of a virtual character is an excellent example of that."}, {"start": 80.9, "end": 89.30000000000001, "text": " In this case, this learning algorithm recognizes that they both have noses and uses this valuable information in the style transfer process."}, {"start": 89.30000000000001, "end": 92.80000000000001, "text": " As a result, it has three super cool applications."}, {"start": 92.80000000000001, "end": 97.4, "text": " First, the regular photo to style transfer that we all know and love."}, {"start": 97.4, "end": 109.4, "text": " Second, it is also capable of swapping the style of two input images."}, {"start": 109.4, "end": 117.9, "text": " Third, and hold on to your papers because this is going to be even more insane."}, {"start": 117.9, "end": 120.9, "text": " Style or sketch to photo."}, {"start": 120.9, "end": 133.4, "text": " And we have a plus one here as well, so fourth, it also supports color transfer between photographs, which will allow creating amazing time lapse videos."}, {"start": 133.4, "end": 141.4, "text": " I always try to lure you fellow scholars into looking at these papers, so make sure to have a look at the paper for some more results on this."}, {"start": 141.4, "end": 145.9, "text": " And you can see here that this method was compared to several other techniques."}, {"start": 145.9, "end": 150.4, "text": " For instance, you can see the cycle consistency paper and patch match."}, {"start": 150.4, "end": 164.4, "text": " And this is one of those moments when I get super happy because more than 170 episodes into the series, we can not only appreciate the quality of these new results, but we also had previous episodes about both of these algorithms."}, {"start": 164.4, "end": 167.4, "text": " As always, the links are available in the video 
description."}, {"start": 167.4, "end": 170.20000000000002, "text": " Make sure to have a look, it's going to be a lot of fun."}, {"start": 170.20000000000002, "end": 172.9, "text": " The source code of this project is also available."}, {"start": 172.9, "end": 176.9, "text": " We also have a ton of episodes on computer graphics in the series."}, {"start": 176.9, "end": 178.8, "text": " Make sure to have a look at those as well."}, {"start": 178.8, "end": 184.3, "text": " Every now and then, I get emails from viewers who say that they came for the AI videos."}, {"start": 184.3, "end": 189.3, "text": " In just in case, watched a recent episode on computer graphics and were completely hooked."}, {"start": 189.3, "end": 190.3, "text": " Give it a try."}, {"start": 190.3, "end": 212.3, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=RygQnpQMdPI
Elastoplastic Hair and Cloth Simulations | Two Minute Papers #176
The paper "Anisotropic Elastoplasticity for Cloth, Knit and Hair Frictional Contact" is available here: http://www.math.ucla.edu/~jteran/papers/JGT17.pdf http://dl.acm.org/citation.cfm?id=3073623 Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-791886/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a piece of elastic cloth modeled from more than a million tiny triangles and its interaction with seven million colored grains of sand. This is super challenging because of two things. One, we have to compute the elastic deformations when these millions of tiny elements collide, and two, all this while maintaining two-way coupling. This means that the cloth has an effect on the sand, but the effect of the sand is also simulated on the cloth. In elastic deformations, there are potential interactions between distant parts of the same material, and self-collisions may also occur. Previous state-of-the-art techniques either lacked these self-collision effects, or, if they were able to handle them, also introduced unwanted fracturing of the material. With this novel work, it is possible to simulate elastic deformations as you can see here, but it also supports simulating plasticity, as you can see here with the cloth pieces sliding off of each other. Beautiful. This new technique also supports simulating a variety of different types of materials: knitted cloth ponchos, shag carpets, twisting cloth, hair, tearing fiber, and more. And it does all this with a typical execution time between 10 to 90 seconds per frame. In these black screens, you see the timing information and the number of particles and triangles used in these simulations. And you will see that there are many scenes where millions of triangles and particles are processed in very little time. It is very rare that we can implement just one technique that takes care of so many kinds of interactions while still obtaining results very quickly. This is insanity. This paper is absolutely top-tier bang for the buck, and I am really excited to see some more elastoplastic simulations in all kinds of digital media in the future. You know our motto: a couple more papers down the line, and having something like this in real-time applications may become a reality. Really cool. If you enjoyed this episode and you feel that 8 of these videos a month is worth a dollar, please consider supporting us on Patreon. One dollar per month really doesn't break the bank, but it is a great deal of help for us in keeping the series going. And your support has always been absolutely amazing, and I am so grateful to have so many devoted fellow scholars like you in our ranks. Details are in the video description. Have a look. Thanks for watching and for your generous support, and I'll see you next time.
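As a rough illustration of what "elastoplasticity" means in such a simulator, here is a tiny Python sketch of a 1D elastic-perfectly-plastic stress update with a return mapping: the material responds elastically until a yield stress is reached, and any further strain becomes permanent. This is not the paper's anisotropic MPM model; the material parameters are made-up illustrative values.

import numpy as np

def elastoplastic_stresses(total_strains, youngs=1.0, yield_stress=0.1):
    """1D elastic-perfectly-plastic stress for a prescribed total-strain history."""
    plastic = 0.0           # accumulated permanent (plastic) strain
    stresses = []
    for eps in total_strains:
        trial = youngs * (eps - plastic)          # elastic trial stress
        if abs(trial) > yield_stress:             # yield condition violated?
            # return mapping: excess strain becomes plastic, stress stays on the yield surface
            plastic += np.sign(trial) * (abs(trial) - yield_stress) / youngs
            trial = np.sign(trial) * yield_stress
        stresses.append(trial)
    return stresses

print(elastoplastic_stresses(np.linspace(0.0, 0.3, 7)))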
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifei."}, {"start": 4.28, "end": 9.92, "text": " This is a piece of elastic cloth modeled from more than a million tiny triangles and its"}, {"start": 9.92, "end": 13.88, "text": " interaction with seven million colored grains of sand."}, {"start": 13.88, "end": 17.0, "text": " This is super challenging because of two things."}, {"start": 17.0, "end": 22.44, "text": " One, we have to compute the elastic deformations when these millions of tiny elements collide"}, {"start": 22.44, "end": 26.36, "text": " and two, all this while maintaining two-way coupling."}, {"start": 26.36, "end": 31.08, "text": " This means that the cloth has an effect on the sand, but the effect of the sand is also"}, {"start": 31.08, "end": 32.88, "text": " simulated on the cloth."}, {"start": 32.88, "end": 37.24, "text": " In elastic deformations, there are potential interactions between distant parts of the"}, {"start": 37.24, "end": 40.76, "text": " same material and self-collisions may also occur."}, {"start": 40.76, "end": 45.480000000000004, "text": " Previous state-of-the-art techniques were either lacking in these self-collision effects,"}, {"start": 45.480000000000004, "end": 51.120000000000005, "text": " or the ones that were able to process that also included the fracturing of the material."}, {"start": 51.120000000000005, "end": 56.16, "text": " With this novel work, it is possible to simulate both elastic deformations as you can see"}, {"start": 56.16, "end": 62.879999999999995, "text": " here, but it also supports simulating plasticity as you can see here with the cloth pieces sliding"}, {"start": 62.879999999999995, "end": 64.52, "text": " off of each other."}, {"start": 64.52, "end": 65.52, "text": " Beautiful."}, {"start": 65.52, "end": 70.47999999999999, "text": " This new technique also supports simulating a variety of different types of materials,"}, {"start": 70.47999999999999, "end": 76.75999999999999, "text": " knitted cloth ponchos, shag carpets, twisting cloth, hair, tearing fiber, and more."}, {"start": 76.75999999999999, "end": 82.88, "text": " And it does all this with a typical execution time between 10 to 90 seconds per frame."}, {"start": 82.88, "end": 87.24, "text": " In these black screens, you see the timing information and the number of particles and"}, {"start": 87.24, "end": 89.72, "text": " triangles used in these simulations."}, {"start": 89.72, "end": 94.24, "text": " And you will see that there are many scenes where millions of triangles and particles are"}, {"start": 94.24, "end": 96.64, "text": " processed in very little time."}, {"start": 96.64, "end": 101.44, "text": " It is very rare that we can implement only one technique that takes care of so many"}, {"start": 101.44, "end": 105.72, "text": " kinds of interactions while still obtaining results very quickly."}, {"start": 105.72, "end": 107.03999999999999, "text": " This is insanity."}, {"start": 107.03999999999999, "end": 111.91999999999999, "text": " This paper is absolutely top-tier bank for the buck, and I am really excited to see some"}, {"start": 111.92, "end": 116.76, "text": " more elastic plastic simulations in all kinds of digital media in the future."}, {"start": 116.76, "end": 121.24000000000001, "text": " You know our motto, a couple more papers down the line and having something like this in"}, {"start": 121.24000000000001, "end": 124.04, "text": " real-time applications may become a reality."}, {"start": 
124.04, "end": 125.04, "text": " Really cool."}, {"start": 125.04, "end": 129.88, "text": " If you enjoyed this episode and you feel that 8 of these videos a month is worth a dollar,"}, {"start": 129.88, "end": 132.48, "text": " please consider supporting us on Patreon."}, {"start": 132.48, "end": 136.72, "text": " One dollar per month really doesn't break the bank, but it is a great deal of help for"}, {"start": 136.72, "end": 138.72, "text": " us in keeping the series going."}, {"start": 138.72, "end": 143.92, "text": " And your support has always been absolutely amazing, and I am so grateful to have so many"}, {"start": 143.92, "end": 146.72, "text": " devoted fellow scholars like you in our ranks."}, {"start": 146.72, "end": 148.72, "text": " Details are in the video description."}, {"start": 148.72, "end": 149.72, "text": " Have a look."}, {"start": 149.72, "end": 169.72, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6c2T2cykE_A
Animating Elastic Rods With Sound | Two Minute Papers #175
The paper "Animating Elastic Rods with Sound" is available here: https://www.cs.cornell.edu/projects/rodsound/ Watch the original video with the sound samples here: https://www.youtube.com/watch?v=ePySSLiyghs WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1681565/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we talk a lot about photorealistic rendering and making sure that the appearance of our virtual objects is simulated properly. A lot of works focus on how things look. However, in order to create a more complete sensorial experience, we also have to simulate how these things sound. And today, we are going to have a look at a really cool piece of work that simulates the sound of virtual elastic rods made of aluminum, steel, oak and rubber. And of course, before you ask, this also means that there will be sound simulations of everyone's favorite toy, the Walking Slinky. As for all papers that have anything to do with sound synthesis, I recommend using a pair of headphones for this episode. The sound emerging from these elastic rods is particularly difficult to simulate because the sound frequencies vary quite a bit over time, and the objects themselves are also in motion and subject to deformations during the simulation. And as you will see with the Slinky, we potentially have tens of thousands of contact events in the meantime. Let's have a look at some results. For the fellow scholars who are worried about the validity of these Star Wars sounds, I know you're out there, make sure to watch the video until the end. The authors of the paper proposed a dipole model to create these simulations. Dipoles are typically used to approximate electric and magnetic fields in physics, and in this case it is really amazing to see an application of them for sound synthesis. For instance, in most cases, these sound waves are symmetric around 2D cross sections of these objects, which can be described by a dipole model quite well. Also, it is computationally quite efficient and can eliminate the lengthy pre-computation steps that are typically present in previous techniques. There are also comparisons against the state of the art, and we can hear how much richer the sound of this new technique is. And as you know all too well, I love all papers that have something to do with the real world around us. And the reason for this is that we can try the very best kind of validation for these algorithms, which is when we let reality be our judge. Some frequency plots are also available to validate the output of the algorithm against real-world sound samples from the lab. It is really amazing to see that we can use science to breathe more life into our virtual worlds. Thanks for watching and for your generous support and I'll see you next time.
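To give a flavor of the dipole idea in code, here is a small Python sketch that sums a few damped sinusoidal modes and scales them by a dipole-like directivity factor cos(theta)/r, where theta is the listening angle and r the distance. This is not the paper's rod sound model; the mode frequencies and damping values are invented for illustration.

import numpy as np

def dipole_rod_sound(t, theta, r, modes=((440.0, 3.0), (1320.0, 6.0))):
    """Sum of damped modes, scaled by a dipole-like directivity cos(theta)/r."""
    directivity = np.cos(theta) / max(r, 1e-6)
    signal = np.zeros_like(t)
    for freq, damping in modes:            # made-up (frequency in Hz, damping in 1/s) pairs
        signal += np.exp(-damping * t) * np.sin(2.0 * np.pi * freq * t)
    return directivity * signal

t = np.linspace(0.0, 1.0, 44100)           # one second at 44.1 kHz
samples = dipole_rod_sound(t, theta=0.3, r=2.0)
print(samples[:5])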
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.5600000000000005, "end": 10.040000000000001, "text": " In this series, we talk a lot about photorealistic rendering and making sure that the appearance"}, {"start": 10.040000000000001, "end": 13.16, "text": " of our virtual objects is simulated properly."}, {"start": 13.16, "end": 15.32, "text": " A lot of works on how things look."}, {"start": 15.32, "end": 20.52, "text": " However, in order to create a more complete sensorial experience, we also have to simulate"}, {"start": 20.52, "end": 22.68, "text": " how these things sound."}, {"start": 22.68, "end": 26.8, "text": " And today, we are going to have a look at a really cool piece of work that simulates"}, {"start": 26.8, "end": 33.480000000000004, "text": " the sound of virtual elastic rods made of aluminum, steel, oak tree and rubber."}, {"start": 33.480000000000004, "end": 38.08, "text": " And of course, before you ask, this also means that there will be sound simulations of"}, {"start": 38.08, "end": 41.36, "text": " everyone's favorite toy, the Walking Slinky."}, {"start": 41.36, "end": 48.72, "text": " As for all papers that have anything to do with sound synthesis, I recommend using"}, {"start": 48.72, "end": 51.08, "text": " a pair of headphones for this episode."}, {"start": 51.08, "end": 56.0, "text": " The sound emerging from these elastic rods is particularly difficult to simulate because"}, {"start": 56.0, "end": 61.56, "text": " of the fact that sound frequencies vary quite a bit over time and the objects themselves"}, {"start": 61.56, "end": 65.84, "text": " are also in motion and subject to deformations during the simulation."}, {"start": 65.84, "end": 70.84, "text": " And as you will see with the Slinky, we potentially have tens of thousands of contact events"}, {"start": 70.84, "end": 71.84, "text": " in the meantime."}, {"start": 71.84, "end": 90.36, "text": " Let's have a look at some results."}, {"start": 90.36, "end": 95.24000000000001, "text": " For the fellow scholars who are worried about the validity of these Star Wars sounds, I"}, {"start": 95.24000000000001, "end": 98.92, "text": " know you're out there, make sure to watch the video until the end."}, {"start": 98.92, "end": 103.72, "text": " The authors of the paper proposed a dipole model to create these simulations."}, {"start": 103.72, "end": 108.48, "text": " Dipoles are typically used to approximate electric and magnetic fields in physics and in"}, {"start": 108.48, "end": 113.16, "text": " this case it is really amazing to see an application of it for sound synthesis."}, {"start": 113.16, "end": 118.96000000000001, "text": " For instance, in most cases, these sound waves are typically symmetric around 2D cross sections"}, {"start": 118.96000000000001, "end": 123.4, "text": " of these objects which can be described by a dipole model quite well."}, {"start": 123.4, "end": 128.36, "text": " Also, it is computationally quite effective and can eliminate these lengthy pre-computation"}, {"start": 128.36, "end": 131.84, "text": " steps that are typically present in previous techniques."}, {"start": 131.84, "end": 136.8, "text": " There are also comparisons against the state of the art and we can hear how much richer"}, {"start": 136.8, "end": 152.68, "text": " the sound of this new technique is."}, {"start": 152.68, "end": 157.60000000000002, "text": " And as you know all too well, I love all papers that have something to do 
with the real world"}, {"start": 157.6, "end": 158.6, "text": " around us."}, {"start": 158.6, "end": 162.84, "text": " And the reason for this is that we can try the very best kind of validation for these"}, {"start": 162.84, "end": 171.76, "text": " algorithms and this is when we let reality be our judge."}, {"start": 171.76, "end": 176.44, "text": " Some frequency plots are also available to validate the output of the algorithm against"}, {"start": 176.44, "end": 178.95999999999998, "text": " the real world sound samples from the lab."}, {"start": 178.95999999999998, "end": 184.16, "text": " It is really amazing to see that we can use science to breathe more life in our virtual"}, {"start": 184.16, "end": 185.16, "text": " worlds."}, {"start": 185.16, "end": 188.44, "text": " Watch in and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=343n8xwozJI
Interactive Green-Screen Keying | Two Minute Papers #174
The paper "Interactive High-Quality Green-Screen Keying via Color Unmixing" is available here: http://people.inf.ethz.ch/aksoyy/keying/ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://flic.kr/p/RiCCF2 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In the film industry, we can often see footage of a human walking on the moon, fighting underwater or appearing in any environment without actually going there. To do this, a piece of footage of the actor is recorded in front of a green screen, and then the background of the scene is changed to something else. This process is called green screen keying, and in theory, this sounds simple enough, but make no mistake, this is a challenging problem. Here's why. Issue number one is that separating the foreground from the background is non-trivial and is not a fully automatic process. Let's call this semi-automatic, because the compositing artist starts drawing these separation masks, and even though there is some help from pre-existing software, it still takes quite a bit of manual labor. For instance, in this example, it is extremely difficult to create a perfect separation between the background and the hair of the actor. Our eyes are extremely keen on catching such details, so even the slightest inaccuracies are going to appear as glaring mistakes. This takes a ton of time and effort from the side of the artist, and we haven't even talked about tracking the changes between frames, as we are talking about video animations. I think it is now easy to see that this is a hugely relevant problem in the post-production of feature films. And now onto issue number two, which is subtracting indirect illumination from this footage. This is a beautiful light transport effect where the colors of different diffuse objects bleed onto each other. In this case, the green color of the background bleeds onto the karate uniform. That is normally a beautiful effect, but here it is highly undesirable, because if we put this character in a different environment, it won't look like it belongs there. It will look more like one of those super fake Photoshop disasters that we see everywhere on the internet. And this technique offers a novel solution to this keying problem. First, we are asked to scribble on the screen and mark the most dominant colors of the scene. This we only have to do once, even though we are processing an entire video. As a result, we get an initial map where we can easily fix some of the issues. This is very easy and intuitive, not like those long sessions spent with pixel-by-pixel editing. These colors are then propagated to the entirety of the animation. The final results are compared to a ton of already existing methods on the market, and this one smokes them all. However, what is even more surprising is that it is also way better than what an independent artist produced, which took 10 times as long. Other comparisons are also made for removing indirect illumination, which is also referred to as color unmixing in the paper. It is also shown that the algorithm is not too sensitive to the choice of dominant colors, so there is room for amazing follow-up papers to make the process a bit more automatic. Thanks for watching and for your generous support, and I'll see you next time.
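As a toy illustration of the unmixing idea, here is a short Python sketch that estimates a per-pixel alpha matte from just two user-picked dominant colors by projecting each pixel onto the line between them. The paper's color unmixing energy handles many dominant colors plus spatial regularization; this sketch, its function name and its example colors are only illustrative.

import numpy as np

def two_color_alpha(image, fg_color, bg_color):
    """image: H x W x 3 floats in [0, 1]; returns an H x W alpha matte."""
    fg = np.asarray(fg_color, dtype=float)
    bg = np.asarray(bg_color, dtype=float)
    axis = fg - bg
    alpha = ((image - bg) @ axis) / np.dot(axis, axis)   # projection onto the fg-bg color line
    return np.clip(alpha, 0.0, 1.0)

footage = np.random.rand(4, 4, 3)                        # stand-in for a video frame
matte = two_color_alpha(footage,
                        fg_color=(0.8, 0.6, 0.5),        # made-up foreground color
                        bg_color=(0.1, 0.8, 0.2))        # made-up green-screen color
print(matte.shape)  # (4, 4)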
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.5200000000000005, "end": 9.620000000000001, "text": " In the film industry, we can often see footage of a human walking on the moon, fighting"}, {"start": 9.620000000000001, "end": 14.56, "text": " underwater or appearing in any environment without actually going there."}, {"start": 14.56, "end": 19.6, "text": " To do this, a piece of footage of the actor is recorded in front of a green screen and"}, {"start": 19.6, "end": 23.44, "text": " then the background of the scene is changed to something else."}, {"start": 23.44, "end": 28.560000000000002, "text": " This process is called green screen keying and in theory, this sounds simple enough that"}, {"start": 28.56, "end": 31.56, "text": " make no mistake, this is a challenging problem."}, {"start": 31.56, "end": 32.56, "text": " Here's why."}, {"start": 32.56, "end": 38.44, "text": " Issue number one is that separating the foreground from the background is non-trivial and is not"}, {"start": 38.44, "end": 40.2, "text": " a fully automatic process."}, {"start": 40.2, "end": 45.84, "text": " Let's call this semi-automatic because the compositing artist starts drawing these separation"}, {"start": 45.84, "end": 50.8, "text": " masks and even though there is some help from pre-existing software, it still takes quite"}, {"start": 50.8, "end": 52.56, "text": " a bit of manual labor."}, {"start": 52.56, "end": 58.08, "text": " For instance, in this example, it is extremely difficult to create a perfect separation between"}, {"start": 58.08, "end": 60.68, "text": " the background and the hair of the actor."}, {"start": 60.68, "end": 66.03999999999999, "text": " Our eyes are extremely keen on catching such details, so even the slightest inaccuracies"}, {"start": 66.03999999999999, "end": 68.84, "text": " are going to appear as glaring mistakes."}, {"start": 68.84, "end": 73.03999999999999, "text": " This takes a ton of time and effort from the side of the artist and we haven't even"}, {"start": 73.03999999999999, "end": 78.28, "text": " talked about tracking the changes between frames as we are talking about video animations."}, {"start": 78.28, "end": 82.84, "text": " I think it is now easy to see that this is a hugely relevant problem in the post-production"}, {"start": 82.84, "end": 84.32, "text": " of feature films."}, {"start": 84.32, "end": 90.08, "text": " And now onto issue number two, which is subtracting indirect illumination from this footage."}, {"start": 90.08, "end": 95.03999999999999, "text": " This is a beautiful light transport effect where the color of different diffuse objects"}, {"start": 95.03999999999999, "end": 96.83999999999999, "text": " bleed onto each other."}, {"start": 96.83999999999999, "end": 101.52, "text": " In this case, the green color of the background bleeds onto the karate uniform."}, {"start": 101.52, "end": 106.63999999999999, "text": " That is normally a beautiful effect, but here it is highly undesirable because if we put"}, {"start": 106.63999999999999, "end": 110.96, "text": " this character in a different environment, it won't look like it belongs there."}, {"start": 110.96, "end": 115.67999999999999, "text": " It will look more like one of those super fake Photoshop disasters that we see everywhere"}, {"start": 115.67999999999999, "end": 116.88, "text": " on the internet."}, {"start": 116.88, "end": 120.47999999999999, "text": " And this technique offers a novel 
solution to this key-ing problem."}, {"start": 120.47999999999999, "end": 125.8, "text": " First, we are asked to scribble on the screen and mark the most dominant colors of the"}, {"start": 125.8, "end": 126.8, "text": " scene."}, {"start": 126.8, "end": 131.28, "text": " This we only have to do once, even though we are processing an entire video."}, {"start": 131.28, "end": 136.51999999999998, "text": " As a result, we get an initial map where we can easily fix some of the issues."}, {"start": 136.52, "end": 141.96, "text": " This is very easy and intuitive, not like those long sessions spent with pixel by pixel"}, {"start": 141.96, "end": 142.96, "text": " editing."}, {"start": 142.96, "end": 146.72, "text": " These colors are then propagated to the entirety of the animation."}, {"start": 146.72, "end": 152.36, "text": " The final results are compared to a ton of already existing methods on the market, and"}, {"start": 152.36, "end": 154.0, "text": " this one smokes them all."}, {"start": 154.0, "end": 159.44, "text": " However, what is even more surprising is that it is also way better than what an independent"}, {"start": 159.44, "end": 163.24, "text": " artist produced which took 10 times that long."}, {"start": 163.24, "end": 168.04000000000002, "text": " Other comparisons are also made for removing indirect illumination, which is also referred"}, {"start": 168.04000000000002, "end": 170.56, "text": " to as color and mixing in the paper."}, {"start": 170.56, "end": 176.08, "text": " It is also shown that the algorithm is not too sensitive to this choice of dominant colors,"}, {"start": 176.08, "end": 181.12, "text": " so there is room for amazing follow-up papers to make the process a bit more automatic."}, {"start": 181.12, "end": 201.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=EGnbAgbRIh4
Refocusing Videos With Neural Networks | Two Minute Papers #173
The paper "Light Field Video Capture Using a Learning-Based Hybrid Imaging System" and its implementation is available here: https://arxiv.org/abs/1705.02997 https://github.com/junyanz/light-field-video Recommended for you: Amazing Slow Motion Videos With Optical Flow - https://www.youtube.com/watch?v=7aLda2E0Yyg Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Steef, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-272263/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Károly Zsolnai-Fehér. Whenever we take an image with our camera and look at it after an event, we often feel that many of them are close to perfect. If only it was less blurry, or the focus distance was a bit further away. But the magic moment is now gone, and there's nothing to do other than cursing at the blurry footage that we are left with when showing it to our friends. However, if we have access to light fields, we can change some camera parameters after the photo was taken. This includes changing the focal distance or even slightly adjusting the viewpoint of the camera. How cool is that? This can be accomplished by a light field camera, which is also referred to as a plenoptic camera. This tries to record not only light intensities, but the direction of incoming light as well. Earlier, this was typically achieved by using an array of cameras, which is both expensive and cumbersome. And here comes the problem with using only one light field camera. Because of the increased amount of data that they have to record, current light field cameras are only able to take three frames per second. That's hardly satisfying if we wish to do this sort of post editing for videos. This work offers a novel technique to remedy this situation by attaching a standard camera to this light field camera. The idea is that the standard camera has 30 frames per second, so tons of frames, but with little additional information, while the light field camera has only a few frames per second, but with a ton of additional information. If we stitch all this information together in a smart way, we may be able to get full light field editing for videos. Earlier, we have talked about interpolation techniques that can fill in some of the missing frames in videos. This way, we can fill in maybe every other frame in a footage, or we can be a bit more generous than that. However, if we are shown three frames per second and we have to create a smooth video by filling in the blanks, it would almost be like asking an algorithm to create a movie from a comic book. This would be awesome, but we're not there yet. Too much information is missing. This technique works with a bit more information than that, and the key idea is to use two convolutional neural networks to fill in the blanks. One is used to predict flows, which describe the movements and rotations of the objects in the scene, and one to predict the final appearance of the objects. Basically, one for how they move and one for how they look. And the results are just absolutely incredible. It is also blazing fast and takes less than a tenth of a second to create one of these new views. Here, you can see how the final program is able to change the focal distance in any of the frames of our video, or we can even click on something in the image to get it in focus. And all this is done after the video has been taken. The source code of this project is also available. With some more improvements, this could be tremendously useful in the film industry, because directors could adjust their scenes after the shooting and not just sigh over the inaccuracies and missed opportunities. And this is just one of the many other possible applications. Absolutely amazing. If you have enjoyed this episode, don't forget to subscribe to Two Minute Papers, and also make sure to click the bell icon to never miss an episode. Thanks for watching and for your generous support, and I'll see you next time.
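For context, here is a minimal Python sketch of classic shift-and-add refocusing from a set of sub-aperture light field views: each view is shifted in proportion to its (u, v) camera offset and a chosen focus parameter, then all views are averaged. The paper's contribution is reconstructing the missing views over time with neural networks, which is not shown here; the views and offsets below are random placeholders.

import numpy as np

def refocus(views, offsets, focus):
    """Shift each H x W sub-aperture view by focus * (u, v) and average them."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (u, v) in zip(views, offsets):
        dy, dx = int(round(focus * v)), int(round(focus * u))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)

views = [np.random.rand(16, 16) for _ in range(9)]          # stand-in sub-aperture views
offsets = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]  # their (u, v) camera offsets
print(refocus(views, offsets, focus=2.0).shape)  # (16, 16)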
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two minute papers with Karojona Ifeher."}, {"start": 4.84, "end": 9.4, "text": " Whenever we take an image with our camera and look at it after an event, we often feel"}, {"start": 9.4, "end": 12.24, "text": " that many of them are close to perfect."}, {"start": 12.24, "end": 16.84, "text": " If only it was less blurry or the focus distance was a bit further away."}, {"start": 16.84, "end": 21.68, "text": " But the magic moment is now gone, and there's nothing to do other than cursing at the blurry"}, {"start": 21.68, "end": 25.0, "text": " footage that we are left with when showing it to our friends."}, {"start": 25.0, "end": 30.560000000000002, "text": " However, if we have access to light fields, we can change some camera parameters after"}, {"start": 30.560000000000002, "end": 32.2, "text": " the photo was taken."}, {"start": 32.2, "end": 36.8, "text": " This includes changing the focal distance or even slightly adjusting the viewpoint of"}, {"start": 36.8, "end": 37.8, "text": " the camera."}, {"start": 37.8, "end": 39.480000000000004, "text": " How cool is that?"}, {"start": 39.480000000000004, "end": 44.32, "text": " This can be accomplished by a light field camera, which is also referred to as a planoptic"}, {"start": 44.32, "end": 45.32, "text": " camera."}, {"start": 45.32, "end": 50.96, "text": " This tries to record not only light intensities, but the direction of incoming light as well."}, {"start": 50.96, "end": 56.44, "text": " Earlier, this was typically achieved by using an array of cameras that's both expensive"}, {"start": 56.44, "end": 57.52, "text": " and cumbersome."}, {"start": 57.52, "end": 61.24, "text": " And here comes the problem with using only one light field camera."}, {"start": 61.24, "end": 65.28, "text": " Because of the increased amount of data that they have to record, current light field"}, {"start": 65.28, "end": 69.44, "text": " cameras are only able to take three frames per second."}, {"start": 69.44, "end": 74.12, "text": " That's hardly satisfying if we wish to do this sort of post editing for videos."}, {"start": 74.12, "end": 79.64, "text": " This work offers a novel technique to remedy this situation by attaching a standard camera"}, {"start": 79.64, "end": 81.24, "text": " to this light field camera."}, {"start": 81.24, "end": 86.96000000000001, "text": " The goal is that the standard camera has 30, so tons of frames per second, but with little"}, {"start": 86.96000000000001, "end": 92.44, "text": " additional information and a light field camera, which has only a few frames per second,"}, {"start": 92.44, "end": 95.2, "text": " but with a ton of additional information."}, {"start": 95.2, "end": 100.6, "text": " If we stitch all this information together in a smart way, maybe it is a possibility to"}, {"start": 100.6, "end": 103.6, "text": " get full light field editing for videos."}, {"start": 103.6, "end": 108.08, "text": " Earlier, we have talked about interpolation techniques that can fill some of the missing"}, {"start": 108.08, "end": 109.6, "text": " frames in videos."}, {"start": 109.6, "end": 114.52, "text": " And this way, we can fill in maybe every other frame in a footage, or we can be a bit more"}, {"start": 114.52, "end": 115.67999999999999, "text": " generous than that."}, {"start": 115.67999999999999, "end": 120.96, "text": " However, if we are shown three frames per second and we have to create a smooth video by"}, {"start": 120.96, "end": 126.28, "text": " 
filling the blanks, would almost be like asking an algorithm to create a movie from a comic"}, {"start": 126.28, "end": 127.28, "text": " book."}, {"start": 127.28, "end": 129.56, "text": " This would be awesome, but we're not there yet."}, {"start": 129.56, "end": 131.56, "text": " Too much information is missing."}, {"start": 131.56, "end": 135.95999999999998, "text": " This teaching process works with a bit more information than this, and the key idea is"}, {"start": 135.96, "end": 140.04000000000002, "text": " to use two convolutional neural networks to fill in the blanks."}, {"start": 140.04000000000002, "end": 144.8, "text": " One is used to predict flows, which describe the movements and rotations of the objects"}, {"start": 144.8, "end": 149.24, "text": " in the scene, and one to predict the final appearance of the objects."}, {"start": 149.24, "end": 153.12, "text": " Basically, one for how they move and one for how they look."}, {"start": 153.12, "end": 156.0, "text": " And the results are just absolutely incredible."}, {"start": 156.0, "end": 161.04000000000002, "text": " It is also blazing fast and takes less than a tenth of a second to create one of these"}, {"start": 161.04000000000002, "end": 162.04000000000002, "text": " new views."}, {"start": 162.04, "end": 167.64, "text": " Here, you can see how the final program is able to change the focal distance of any of"}, {"start": 167.64, "end": 172.18, "text": " the frames in our video, or we can even click on something in the image to get it in"}, {"start": 172.18, "end": 173.18, "text": " focus."}, {"start": 173.18, "end": 176.28, "text": " And all this is done after the video has been taken."}, {"start": 176.28, "end": 178.79999999999998, "text": " The source code of this project is also available."}, {"start": 178.79999999999998, "end": 183.51999999999998, "text": " With some more improvements, this could be tremendously useful in the film industry, because"}, {"start": 183.51999999999998, "end": 189.72, "text": " the directors could adjust their scenes after the shooting and not just sigh over the inaccuracies"}, {"start": 189.72, "end": 191.23999999999998, "text": " and missed opportunities."}, {"start": 191.24, "end": 195.04000000000002, "text": " And this is just one of the many possible other applications."}, {"start": 195.04000000000002, "end": 196.04000000000002, "text": " Absolutely amazing."}, {"start": 196.04000000000002, "end": 200.16, "text": " If you have enjoyed this episode, don't forget to subscribe to Two Minute Papers and also"}, {"start": 200.16, "end": 203.32000000000002, "text": " make sure to click the bell icon to never miss an episode."}, {"start": 203.32, "end": 223.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=twWHwVaBfM8
Phace: Physics-based Face Modeling and Animation | Two Minute Papers #172
The paper "Phace: Physics-based Face Modeling and Animation" is available here: http://lgg.epfl.ch/publications/2017/Phace/index.php Our Patreon page: https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-984031/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This work is about transferring our gestures onto a virtual human's face in a way that is physically correct. This means that not only the changes in the facial geometry are transferred to a digital character. No, no, no. Here is how it works. This piece of work uses a really cool digital representation of our face that contains not only geometry, but also information about the bone, flesh and muscle structures as well. This means that it builds on a physically accurate model which synthesizes animations where the human face is actuated by the appropriate muscles. We start out with a surface scan of the user, which, through a registration step, is then converted to a set of expressions that we wish to achieve. The inverse physics module tries to guess exactly which muscles are used and how they are used to achieve these target expressions. The animation step takes information on how the desired target expressions evolve in time, plus some physics information, such as gravity or wind, and the forward physics unit computes the final simulation of the digital character. So while we are talking about the effects of gravity and wind, here you can see how this can create more convincing outputs, because these characters really become a part of their digital environment. As a result, the body mass index of a character can also be changed in both directions, slimming or fattening the face. Lip enhancement is also a possibility. If we had super high resolution facial scans, maybe a follow-up work could simulate the effects of Botox injections. How cool would that be? Also, one of my favorite features of this technique is that it also enables artistic editing. By means of drawing, we can also specify a map of stiffness and mass distributions, and if we feel cruel enough, we can create a barely functioning human face to model and animate virtual zombies. Imagine what artists could do with this, especially in the presence of super high resolution textures and photorealistic rendering. Oh my! Another glimpse of the future of computer graphics and animation. Make sure to have a look at the paper for more applications. For instance, they also demonstrate the possibility of modifying the chin and the jawbone. They even have some results in simulating the effect of Bell's palsy, which is the paralysis of facial muscles on one side. While we are at this high note of illnesses, if you enjoyed this episode and would like to support us, you can pick up really cool perks like early access to all of these episodes on Patreon. The link is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
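To hint at what the inverse physics step has to solve, here is a heavily simplified Python sketch: if each muscle's effect on the face mesh were linear, finding activations that best reproduce a target expression would reduce to a least-squares fit with non-negative activations. The paper solves a much harder nonlinear problem with a real flesh simulation; the basis matrix, sizes and clipping below are placeholders.

import numpy as np

n_vertices, n_muscles = 300, 12
# Placeholder: each column is the mesh displacement caused by one fully activated muscle.
basis = np.random.rand(3 * n_vertices, n_muscles)
target = basis @ np.random.rand(n_muscles)               # a synthetic target expression

activations, *_ = np.linalg.lstsq(basis, target, rcond=None)
activations = np.clip(activations, 0.0, None)            # crude stand-in for a non-negativity constraint
print(activations.round(2))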
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Karojiwana Ifehir."}, {"start": 4.46, "end": 11.46, "text": " This work is about transferring our gestures onto a virtual human's face in a way that is physically correct."}, {"start": 11.46, "end": 17.34, "text": " This means that not only the changes in the facial geometry are transferred to a digital character."}, {"start": 17.34, "end": 19.62, "text": " No, no, no. Here is how it works."}, {"start": 19.62, "end": 26.8, "text": " This piece of work uses a really cool digital representation of our face that contains not only geometry,"}, {"start": 26.8, "end": 31.68, "text": " but there's also information about the bone and flesh and muscle structures as well."}, {"start": 31.68, "end": 36.86, "text": " This means that it builds on a physically accurate model which synthesizes animations,"}, {"start": 36.86, "end": 40.56, "text": " where this human face is actuated by the appropriate muscles."}, {"start": 40.56, "end": 49.2, "text": " We start out with a surface scan of the user, which through a registration step is then converted to a set of expressions that we wish to achieve."}, {"start": 49.2, "end": 54.1, "text": " The inverse physics module tries to guess exactly which muscles are used"}, {"start": 54.1, "end": 57.660000000000004, "text": " and how they are used to achieve these target expressions."}, {"start": 57.660000000000004, "end": 64.6, "text": " The animation step takes information of how the desired target expressions evolve in time and some physics information,"}, {"start": 64.6, "end": 71.36, "text": " such as gravity or wind, and the forward physics unit computes the final simulation of the digital character."}, {"start": 71.36, "end": 78.9, "text": " So while we are talking about the effects of gravity and wind, here you can see how this can create more convincing outputs"}, {"start": 78.9, "end": 85.9, "text": " because these characters really become a part of their digital environment."}, {"start": 85.9, "end": 95.7, "text": " As a result, the body mass index of a character can also be changed in both directions, slimming or fattening the face."}, {"start": 95.7, "end": 98.5, "text": " Lip enhancement is also a possibility."}, {"start": 98.5, "end": 105.4, "text": " If we had super high resolution facial scans, maybe a follow-up work could simulate the effects of Botox injections."}, {"start": 105.4, "end": 106.80000000000001, "text": " How could would that be?"}, {"start": 106.8, "end": 112.3, "text": " Also, one of my favorite features of this technique is that it also enables artistic editing."}, {"start": 112.3, "end": 120.1, "text": " By means of drawing, we can also specify a map of stiffness and mass distributions, and if we feel cruel enough,"}, {"start": 120.1, "end": 126.0, "text": " we can create a barely functioning human face to model and animate virtual zombies."}, {"start": 126.0, "end": 133.3, "text": " Imagine what artists could do with this, especially in the presence of super high resolution textures and photorealistic rendering."}, {"start": 133.3, "end": 134.3, "text": " Oh my!"}, {"start": 134.3, "end": 137.8, "text": " Another glimpse of the future of computer graphics and animation."}, {"start": 137.8, "end": 140.8, "text": " Make sure to have a look at the paper for more applications."}, {"start": 140.8, "end": 146.3, "text": " For instance, they also demonstrate the possibility of modifying the chin and the jawbone."}, {"start": 146.3, "end": 
154.20000000000002, "text": " They even have some result in simulating the effect of Bell's palsy, which is the paralysis of facial muscles on one side."}, {"start": 154.20000000000002, "end": 159.5, "text": " While we are at this high note of illnesses, if you enjoy this episode and would like to support us,"}, {"start": 159.5, "end": 165.0, "text": " you can pick up really cool perks like early access for all of these episodes on Patreon."}, {"start": 165.0, "end": 167.0, "text": " The link is available in the video description."}, {"start": 167.0, "end": 190.5, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7x2UvvD48Fw
Real-Time Hair Rendering With Deep Opacity Maps | Two Minute Papers #171
The paper "Deep Opacity Maps" is available here: http://www.cemyuksel.com/research/deepopacity/ Unofficial implementation: http://prideout.net/blog/?p=69 Recommended for you: The Dunning-Kruger Effect - https://www.youtube.com/watch?v=4Y7RIAgOpn0 Are We Living In a Computer Simulation? - https://www.youtube.com/watch?v=ATN9oqMF_qk Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1853957/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In earlier episodes, we've seen plenty of video footage about hair simulations and rendering. And today, we are going to look at a cool new technique that produces self-shadowing effects for hair and fur. In this image pair, you can see the drastic difference that shows how prominent this effect is in the visual appearance of hair. Just look at that. Beautiful. But computing such a thing is extremely costly. Since we have a dense piece of geometry, for instance hundreds of thousands of hair strands, we have to know how each one occludes the other ones. This would take hopelessly long to compute. To even get a program that executes in a reasonable amount of time, we clearly need to simplify the problem further. An earlier technique takes a few planes that cut the hair volume into layers. These planes are typically regularly spaced outward from the light sources, and it is much easier to work with a handful of these volume segments than with the full geometry. The more planes we use, the more layers we obtain, and the higher quality results we can expect. However, even if we can do this in real time, we will produce unrealistic images when using around 16 layers. Well, of course, we should then crank up the number of layers some more. If we do that, for instance by using 128 layers, we can expect better quality results, but we'll be able to process an image only twice a second, which is far from competitive. And even then, the final results still contain layering artifacts and are not very close to the ground truth. There has to be a better way to do this. And with this new technique, called Deep Opacity Maps, these layers are chosen more wisely, and this way we can achieve higher quality results using only three layers, and it runs easily in real time. It is also more memory efficient than previous techniques. The key idea is that if we look at the hair from the light source's point of view, we can record how far away different parts of the geometry are from the light source. Then we can create the new layers further and further away according to this shape. This way, the layers are not planar anymore. They adapt to the scene that we have at hand and contain significantly more useful occlusion information. As you can see, this new technique blows all previous methods away and is incredibly simple. I have found an implementation from Philip Rideout. The link to this is available in the video description. If you have found more, let me know and I'll include your findings in the video description for the fellow tinkerers out there. The paper is ample in comparisons. Make sure to have a look at that too. And sometimes I get some messages saying, Károly, why do you bother covering papers from so many years ago? It doesn't make any sense. And here, you can see that part of the excitement of Two Minute Papers is that the next episode can be about absolutely anything. The series has been mostly focusing on computer graphics and machine learning papers, but don't forget that we also have an episode on whether we are living in a simulation, or the Dunning-Kruger effect, and so much more. I've put a link to both of them in the video description for your enjoyment. The other reason for covering older papers is that a lot of people don't know about them, and if we can help, even just a tiny bit, to make sure these incredible works see more widespread adoption, then we've done our job well. 
Thanks for watching and for your generous support and I'll see you next time.
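Here is a rough Python sketch of the deep opacity map idea described above: from the light's point of view, record the nearest hair depth per pixel, then bin every strand sample into a few layers that start at that recorded depth, accumulating opacity per layer. This is a CPU toy with random placeholder data, not the paper's GPU implementation, and the resolution, layer count and layer spacing are made up.

import numpy as np

def deep_opacity_map(px, py, depth, alpha, res=32, n_layers=3, spacing=0.1):
    """px, py: light-space pixel coordinates of strand samples; depth: light-space depths."""
    z_start = np.full((res, res), np.inf)
    np.minimum.at(z_start, (py, px), depth)               # nearest hair depth per pixel
    layers = np.zeros((n_layers, res, res))
    layer_id = np.clip(((depth - z_start[py, px]) / spacing).astype(int), 0, n_layers - 1)
    np.add.at(layers, (layer_id, py, px), alpha)          # accumulate opacity per adaptive layer
    return z_start, layers

n = 1000                                                  # placeholder strand samples
px, py = np.random.randint(0, 32, n), np.random.randint(0, 32, n)
depth = np.random.rand(n)
z_start, layers = deep_opacity_map(px, py, depth, alpha=np.full(n, 0.05))
print(layers.shape)  # (3, 32, 32)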
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Ejolna Ifehir."}, {"start": 4.32, "end": 10.0, "text": " In earlier episodes, we've seen plenty of video footage about hair simulations and rendering."}, {"start": 10.0, "end": 17.12, "text": " And today, we are going to look at a cool new technique that produces self-shadowing effects for hair and fur."}, {"start": 17.12, "end": 22.96, "text": " In this image pair, you can see this drastic difference that shows how prominent this effect is"}, {"start": 22.96, "end": 26.96, "text": " in the visual appearance of hair. Just look at that. Beautiful."}, {"start": 26.96, "end": 32.4, "text": " But computing such a thing is extremely costly. Since we have a dense piece of geometry,"}, {"start": 32.4, "end": 38.64, "text": " for instance hundreds of thousands of hair strands, we have to know how each one occludes the other ones."}, {"start": 38.64, "end": 44.08, "text": " This would take hopelessly long to compute. To even get a program that executes in a reasonable"}, {"start": 44.08, "end": 50.0, "text": " amount of time, we clearly need to simplify the problem further. An earlier technique takes a few"}, {"start": 50.0, "end": 56.88, "text": " planes that cut the hair volume into layers. These planes are typically regularly spaced outward"}, {"start": 56.88, "end": 62.160000000000004, "text": " from the light sources, and it is much easier to work with a handful of these volume segments"}, {"start": 62.160000000000004, "end": 69.52000000000001, "text": " than with the full geometry. The more planes we use, the more layers we obtain, and the higher quality results we can expect."}, {"start": 69.52000000000001, "end": 76.96000000000001, "text": " However, even if we can do this in real time, we will produce unrealistic images when using around 16 layers."}, {"start": 76.96000000000001, "end": 81.2, "text": " Well, of course, we should then crank up the number of layers some more."}, {"start": 81.2, "end": 88.08, "text": " If we do that, for instance by now, using 128 layers, we can expect better quality results,"}, {"start": 88.08, "end": 93.36, "text": " but we'll be able to process an image only twice a second, which is far from competitive."}, {"start": 93.36, "end": 99.92, "text": " And even then, the final results still contain layering artifacts, and are not very close to the ground truth."}, {"start": 99.92, "end": 106.16, "text": " There has to be a better way to do this. And with this new technique called Deep Opacity Maps,"}, {"start": 106.16, "end": 112.32, "text": " these layers are chosen more wisely, and this way we can achieve higher quality results with using"}, {"start": 112.32, "end": 118.72, "text": " only three layers, and it runs easily in real time. It is also more memory efficient than previous"}, {"start": 118.72, "end": 123.84, "text": " techniques. The key idea is that if we look at the hair from the light sources point of view,"}, {"start": 123.84, "end": 130.0, "text": " we can record how far away different parts of the geometry are from the light source."}, {"start": 130.0, "end": 134.8, "text": " Then we can create the new layers further and further away according to this shape."}, {"start": 134.8, "end": 140.08, "text": " This way, the layers are not plain or anymore. They adapt to the scene that we have at hand,"}, {"start": 140.08, "end": 144.56, "text": " and contain significantly more useful occlusion information. 
As you can see,"}, {"start": 144.56, "end": 149.28, "text": " this new technique blows all previous methods away and is incredibly simple."}, {"start": 152.88000000000002, "end": 158.24, "text": " I have found an implementation from Philip Rideout. The link to this is available in the video"}, {"start": 158.24, "end": 162.72000000000003, "text": " description. If you have found more, let me know and I'll include your findings in the video"}, {"start": 162.72, "end": 168.0, "text": " description for the fellow tinkerers out there. The paper is ample in comparisons. Make sure to"}, {"start": 168.0, "end": 173.6, "text": " have a look at that too. And sometimes I get some messages saying, Karoi, why do you bother"}, {"start": 173.6, "end": 180.0, "text": " covering papers from so many years ago? It doesn't make any sense. And here, you can see that part"}, {"start": 180.0, "end": 186.16, "text": " of the excitement of two minute papers is that the next episode can be about absolutely anything."}, {"start": 186.16, "end": 191.28, "text": " The series has been mostly focusing on computer graphics and machine learning papers, but don't"}, {"start": 191.28, "end": 196.24, "text": " forget that we also have an episode on whether we are living in a simulation or the stunning"}, {"start": 196.24, "end": 201.36, "text": " Kruger effect and so much more. I've put a link to both of them in the video description for your"}, {"start": 201.36, "end": 206.96, "text": " enjoyment. The other reason for covering older papers is that a lot of people don't know about them,"}, {"start": 206.96, "end": 212.8, "text": " and if we can help, just a tiny bit to make sure these incredible works see more widespread"}, {"start": 212.8, "end": 224.88000000000002, "text": " adoption with Donar job well. Thanks for watching and for your generous support and I'll see you next time."}]
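To make the layer-placement idea described above a bit more concrete, here is a rough Python sketch of how a deep opacity map could be assembled. This is a sketch under assumed data layouts (the layer offsets, the fragment tuple format and the array shapes are illustrative assumptions), not the authors' code.

```python
import numpy as np

def build_deep_opacity_map(depth_from_light, hair_samples, layer_offsets=(0.0, 0.5, 1.5, 4.0)):
    """depth_from_light: (H, W) nearest hair depth per light-space pixel.
       hair_samples: iterable of (pixel_y, pixel_x, depth, opacity) hair fragments (assumed format)."""
    h, w = depth_from_light.shape
    opacity = np.zeros((len(layer_offsets), h, w))
    for y, x, d, a in hair_samples:
        # distance of this fragment behind the first hair surface seen from the light
        rel = d - depth_from_light[y, x]
        # which adaptive layer the fragment falls into, measured from that starting depth
        layer = np.searchsorted(layer_offsets, rel, side="right") - 1
        layer = np.clip(layer, 0, len(layer_offsets) - 1)
        # accumulate opacity; shadow lookups later sum the layers in front of a shaded point
        opacity[layer, y, x] += a
    return opacity
```

Because the layer boundaries start at the per-pixel depth of the hair itself, a handful of layers hugs the hair volume instead of slicing empty space, which is the intuition behind needing only three layers.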
Two Minute Papers
https://www.youtube.com/watch?v=HUFh8cEDeII
Visualizing Fluid Flow With Clebsch Maps | Two Minute Papers #170
The paper "Inside Fluids: Clebsch Maps for Visualization and Processing" and its source code are available here: http://multires.caltech.edu/pubs/Clebsch.pdf http://multires.caltech.edu/pubs/ClebschCodes.zip Recommended for you: Schrödinger's Smoke - https://www.youtube.com/watch?v=heY2gfXSHBo Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Sunil Kim, Torsten Reil, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2427263/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Everyone who watches this series knows that among many other scientific topics I am severely addicted to fluid simulations. And today it's time to relapse. And this time we are going to run wind tunnel tests on hummingbirds. Typically when a new engine, airplane, or even a new phone is being designed, we are interested in knowing what the heat flow and dissipation will look like, preferably before we actually build the object. To do so, we often run some virtual wind tunnel tests and optimize our design until we are happy with the results. Then we can proceed to build these new contraptions. Simulating the pressure distribution and the aerodynamic forces is a large topic. However, visualizing these results is at least as well studied and difficult as writing a simulator. What is it exactly that we are interested in? Even if we have an intuitive particle-based simulation with millions and millions of particles, it is clearly impossible to show the path for every one of them. Grid-based simulations are often even more challenging to visualize well. So how do we choose what to visualize and what not to show on the screen? And in this paper, we can witness a new way of visualizing velocity and vorticity fields. And this visualization happens through Clebsch maps. This is a mathematical transformation where we create a sphere, and a set of points on this sphere correspond to vortex lines and their evolution over time. However, if instead of only points, we pick an entire region on this sphere, as you can see with the north and south pole regions here, we obtain vortex tubes. These vortex tubes provide an accurate representation of the vorticity information within the simulation, and this is one of the rare cases where the validity of such a solution can also be shown. Such a crazy idea, loving it. And with this, we can get a better understanding of the airflow around the wings of the hummingbird, but we can also learn more from pre-existing NASA aircraft data sets. Have a look at these incredible results. Publishing a paper at the SIGGRAPH conference is an incredible feat that typically takes a few brilliant guys and several years of unbelievably hard work. Well, apparently this is not such a challenge for Albert Chern, who was also the first author of this and the Schrödinger's Smoke paper just a year ago that we reported on. He's doing incredible work at taking a piece of mathematical theory and showing remarkable applications of it in new areas where we would think it doesn't belong at all. The link is available in the video description, both for this and the previous work, so make sure to have a look. There's lots of beautifully written mathematics to be read there that seems to be from another world. It's a truly unique experience. The paper reports that the source code is also available, but I was unable to find it yet. If you have found a public implementation, please let me know and I'll update the video description with your link. Thanks for watching and for your generous support and I'll see you next time.
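As a tiny illustration of the "region on the sphere becomes a vortex tube" idea mentioned above, here is a hedged Python sketch. It assumes the Clebsch map s(x) is already available as a function returning a unit vector on the sphere; computing that map is the actual contribution of the paper and is not reproduced here.

```python
import numpy as np

def vortex_tube_mask(s, grid, center_dir, cap_angle_rad=0.2):
    """s: assumed function mapping a 3-D point to a unit vector on the sphere (the Clebsch map).
       grid: (N, 3) array of sample points in the fluid domain.
       Returns a boolean mask marking points whose map value lies in a spherical cap,
       i.e. a discrete approximation of a vortex tube."""
    center = np.asarray(center_dir, dtype=float)
    center /= np.linalg.norm(center)
    values = np.array([s(p) for p in grid])      # (N, 3) points on the sphere
    cos_angle = values @ center                   # cosine of the angle to the cap centre
    return cos_angle >= np.cos(cap_angle_rad)     # inside the chosen cap -> inside the tube
```

Shrinking the cap toward a single point would, in the limit, pick out an individual vortex line, which matches the description of points versus regions in the narration above.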
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.04, "end": 10.540000000000001, "text": " Everyone who watches this series knows that among many other scientific topics I am severely"}, {"start": 10.540000000000001, "end": 12.74, "text": " addicted to fluid simulations."}, {"start": 12.74, "end": 15.22, "text": " And today it's time to relapse."}, {"start": 15.22, "end": 19.7, "text": " And this time we are going to run wind tunnel tests on hummingbirds."}, {"start": 19.7, "end": 25.54, "text": " Typically when a new engine, airplane, or even a new phone is being designed, we are interested"}, {"start": 25.54, "end": 30.74, "text": " in knowing how the heat flow and dissipation will look like, preferably before we are designing"}, {"start": 30.74, "end": 31.74, "text": " an object."}, {"start": 31.74, "end": 37.58, "text": " To do so, we often run some virtual wind tunnel tests and optimize our design until we are"}, {"start": 37.58, "end": 39.3, "text": " happy with the results."}, {"start": 39.3, "end": 42.66, "text": " Then we can proceed to build these new contractions."}, {"start": 42.66, "end": 47.3, "text": " Simulating the pressure distribution and the aerodynamic forces is a large topic."}, {"start": 47.3, "end": 52.739999999999995, "text": " However, visualizing these results is at least as well studied and difficult as writing"}, {"start": 52.739999999999995, "end": 53.82, "text": " a simulator."}, {"start": 53.82, "end": 56.14, "text": " What is it exactly that we are interested in?"}, {"start": 56.14, "end": 61.5, "text": " Even if we have an intuitive particle-based simulation, millions and millions of particles,"}, {"start": 61.5, "end": 65.62, "text": " it is clearly impossible to show the path for every one of them."}, {"start": 65.62, "end": 69.58, "text": " Grid-based simulations are often even more challenging to visualize well."}, {"start": 69.58, "end": 74.06, "text": " So how do we choose what to visualize and what not to show on the screen?"}, {"start": 74.06, "end": 80.06, "text": " And in this paper, we can witness a new way of visualizing velocity and vorticity fields."}, {"start": 80.06, "end": 83.5, "text": " And this visualization happens through clapsh maps."}, {"start": 83.5, "end": 88.86, "text": " This is a mathematical transformation where we create a sphere and a set of points on"}, {"start": 88.86, "end": 94.06, "text": " this sphere correspond to vortex lines and their evolution over time."}, {"start": 94.06, "end": 99.82, "text": " However, if instead of only points, we pick an entire region on this sphere, as you can"}, {"start": 99.82, "end": 105.62, "text": " see the north and south pole regions here, we obtain vortex tubes."}, {"start": 105.62, "end": 112.3, "text": " These vortex tubes provide an accurate representation of the vorticity information within the simulation,"}, {"start": 112.3, "end": 118.14, "text": " and this is one of the rare cases where the validity of such a solution can also be shown."}, {"start": 118.14, "end": 121.06, "text": " Such a crazy idea, loving it."}, {"start": 121.06, "end": 126.53999999999999, "text": " And with this, we can get a better understanding of the airflow around the wings of the hummingbird,"}, {"start": 126.53999999999999, "end": 131.66, "text": " but we can also learn more from pre-existing NASA aircraft data sets."}, {"start": 131.66, "end": 134.5, "text": " Have a look at these incredible results."}, {"start": 
134.5, "end": 139.26, "text": " Publishing a paper at the CIGARF conference is an incredible feat that typically takes"}, {"start": 139.26, "end": 144.22, "text": " a few brilliant guys and several years of unbelievably hard work."}, {"start": 144.22, "end": 149.66, "text": " Well, apparently this is not such a challenge for Albert Churn, who was also the first author"}, {"start": 149.66, "end": 154.94, "text": " of this and the Schrodinger smoke paper just a year ago that we reported on."}, {"start": 154.94, "end": 160.54, "text": " He's doing incredible work at taking a piece of mathematical theory and showing remarkable"}, {"start": 160.54, "end": 165.57999999999998, "text": " applications of it in new areas where we would think it doesn't belong at all."}, {"start": 165.58, "end": 170.22, "text": " The link is available in the video description, both for this and the previous works, make"}, {"start": 170.22, "end": 171.22, "text": " sure to have a look."}, {"start": 171.22, "end": 176.10000000000002, "text": " There's lots of beautifully written mathematics to be read there that seems to be from another"}, {"start": 176.10000000000002, "end": 177.10000000000002, "text": " world."}, {"start": 177.10000000000002, "end": 179.3, "text": " It's a truly unique experience."}, {"start": 179.3, "end": 184.10000000000002, "text": " The paper reports that the source code is also available, but I was unable to find it"}, {"start": 184.10000000000002, "end": 185.10000000000002, "text": " yet."}, {"start": 185.10000000000002, "end": 188.62, "text": " If you have found a public implementation, please let me know and I'll update the video"}, {"start": 188.62, "end": 190.02, "text": " description with your link."}, {"start": 190.02, "end": 197.02, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XgB3Xg5st2U
AI Learns Visual Common Sense With New Dataset | Two Minute Papers #169
The paper "The "something something" video database for learning and evaluating visual common sense" is available here: https://arxiv.org/abs/1706.04261 Source for the video results: https://medium.com/@raghavgoyal14/7383596f58df Recommended for you: Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-569070/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about a new endeavor to teach some more common sense to learning algorithms. If you remember, in an earlier episode, we talked about an excellent work by Andrej Karpathy, who built an algorithm that looked at an input image and described in a full, well-formed sentence what is depicted there. By the way, he recently became director of AI at Tesla. Before that, he worked at OpenAI freshly after graduating with a PhD. Now that is a scholarly career, if I've ever seen one. Reading about this earlier work was one of those moments when I really had to hold onto my papers so as not to fall out of the chair, but of course, as it should be with every new breakthrough, the failure cases were thoroughly discussed. One of the motivations for this new work is that we could improve the results by creating a video database that contains a ton of commonly occurring events that would be useful to learn. These events include moving and picking up or holding, poking, throwing, pouring, or plugging in different things, and much more. The goal is that these neural algorithms would get tons of training data for these, and would be able to distinguish whether a human is showing them something, or just moving things about. The already existing video databases are surprisingly sparse in this sort of information, and in this new, freshly published dataset, we can learn on a hundred thousand labeled videos to accelerate research in this direction. I love how many of these works are intertwined, and how follow-up research works try to address the weaknesses of previous techniques. Some initial results with learning on this dataset are also reported to kick things off, and they seem quite good if you look at the results here, but since this was not the focus of the paper, we shouldn't expect superhuman performance. However, as almost all papers in research are stepping stones, two more follow-up papers down the line, this will be an entirely different discussion. I'd love to report back to you on the progress later. Super excited for that. Thanks for watching, and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoizona Ifeher."}, {"start": 4.4, "end": 10.88, "text": " Today, we are going to talk about a new endeavor to teach some more common sense to learning algorithms."}, {"start": 10.88, "end": 16.56, "text": " If you remember, in an earlier episode, we talked about an excellent work by Andrei Carpethy,"}, {"start": 16.56, "end": 23.44, "text": " who built an algorithm that looked at an input image and described in a full, well-formed sentence"}, {"start": 23.44, "end": 28.32, "text": " what is depicted there. By the way, he recently became director of AI at Tesla."}, {"start": 28.32, "end": 33.36, "text": " Before that, he worked at OpenAI freshly after graduating with a PhD."}, {"start": 33.36, "end": 36.64, "text": " Now that is a scholarly career, if I've ever seen one."}, {"start": 36.64, "end": 42.24, "text": " Reading about this earlier work was one of those moments when I really had to hold onto my papers,"}, {"start": 42.24, "end": 47.120000000000005, "text": " not to fall out of the chair, but of course, as it should be with every new breakthrough."}, {"start": 47.120000000000005, "end": 49.84, "text": " The failure cases were thoroughly discussed."}, {"start": 49.84, "end": 55.68, "text": " One of the motivations for this new work is that we could improve the results by creating a video"}, {"start": 55.68, "end": 61.68, "text": " database that contains a ton of commonly occurring events that would be useful to learn."}, {"start": 61.68, "end": 67.6, "text": " These events include moving and picking up or holding, poking, throwing, pouring,"}, {"start": 67.6, "end": 70.56, "text": " or plugging in different things, and much more."}, {"start": 70.56, "end": 75.28, "text": " The goal is that these neural algorithms would get tons of training data for these,"}, {"start": 75.28, "end": 79.68, "text": " and would be able to distinguish whether a human is showing them something,"}, {"start": 79.68, "end": 81.28, "text": " or just moving things about."}, {"start": 81.28, "end": 86.64, "text": " The already existing video databases are surprisingly sparse in this sort of information,"}, {"start": 86.64, "end": 91.28, "text": " and in this new, freshly published dataset, we can learn on a hundred,"}, {"start": 91.28, "end": 95.44, "text": " thousand labeled videos to accelerate research in this direction."}, {"start": 95.44, "end": 101.12, "text": " I love how many of these works are intertwined, and how follow-up research works try to address"}, {"start": 101.12, "end": 106.64, "text": " the weaknesses of previous techniques. Some initial results with learning on this dataset"}, {"start": 106.64, "end": 111.68, "text": " are also reported to kick things off, and they seem quite good if you look at the results here,"}, {"start": 111.68, "end": 116.56, "text": " but since this was not the focus of the paper, we shouldn't expect superhuman performance."}, {"start": 116.56, "end": 122.64, "text": " However, as almost all papers in research are stepping stones, two more follow-up papers down the line,"}, {"start": 122.64, "end": 125.28, "text": " this will be an entirely different discussion."}, {"start": 125.28, "end": 128.4, "text": " I'd love to report back to you on the progress later."}, {"start": 128.4, "end": 129.84, "text": " Super excited for that."}, {"start": 129.84, "end": 142.8, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=vzg5Qe0pTKk
DeepMind's AI Learns Superhuman Relational Reasoning | Two Minute Papers #168
The paper "A simple neural network module for relational reasoning" is available here: https://arxiv.org/abs/1706.01427 Details on our Patreon page: https://www.patreon.com/TwoMinutePapers More on Long Short-Term Memory: Recurrent Neural Network Writes Music and Shakespeare Novels - https://www.youtube.com/watch?v=Jkkjy7dVdaY Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-674828/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is from the Google DeepMind guys and is about teaching neural networks to be capable of relational reasoning. This means that we can present the algorithm with an image and ask it relatively complex relational questions. For instance, if we show it this image and ask what the color of the object closest to the blue object is, it would answer red. This is a particularly difficult problem because all the algorithm has access to is a bunch of pixels. In computer code, it is near impossible to mathematically express that in an image something is below or next to something else, especially in three-dimensional scenes. Beyond the list of colors, this requires a cognitive understanding of the entirety of the image. This is something that we humans are amazingly good at, but computer algorithms are dreadful at this type of work, and this work almost feels like teaching common sense to a learning algorithm. This is accomplished by augmenting an already existing neural network with a relational network module. This is implemented on top of a recurrent neural network that we call long short-term memory, or LSTM, that is able to process sequences of information, for instance, an input sentence. The more seasoned fellow scholars know that we have talked about LSTMs in earlier episodes, and of course, as always, the video description contains these episodes for your enjoyment. Make sure to have a look, you'll love it. As you can see in this result, this relational reasoning works for three-dimensional scenes as well. The aggregated results in the paper show that this method is not only leaps and bounds beyond the capabilities of already existing algorithms, but, and now, hold on to your papers, in many cases it also shows superhuman performance. I love seeing these charts in machine learning papers, where several learning algorithms and humans are benchmarked on the same tasks. This paper was barely published, and there is already a first unofficial public implementation, and two research papers have already referenced it. This is such a great testament to the incredible pace of machine learning research these days; to say that it is competitive would be a huge understatement. Achieving high-quality results in relational reasoning is an important cornerstone for achieving general intelligence, and even though there is still much, much more to do, today is one of those days when we can feel that we are a part of the future. The failure cases are also reported in the paper and are definitely worthy of your time and attention. When I asked for permission to cover this paper in the series, all three scientists from DeepMind happily answered yes within 30 minutes. That's unbelievable. Thanks guys! Also, some of these questions sound like ones that we would get in the easier part of an IQ test. I wouldn't be very surprised to see a learning algorithm complete a full IQ test with flying colors in the near future. If you enjoyed this episode and you feel that eight of these videos a month is worth a dollar, please consider supporting us on Patreon. This way, we can make better videos for your enjoyment. We have recently reached a new milestone, which means that part of these funds will be used to empower research projects. Details are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
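For readers who want to see roughly what such a relational network module looks like in code, here is a minimal PyTorch sketch. The layer sizes, the object and question dimensions, and the exact way the question embedding is appended to each object pair are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim=64, q_dim=32, hidden=256, n_answers=10):
        super().__init__()
        # g processes one ordered pair of objects together with the question embedding
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f maps the aggregated relation vector to answer logits
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_answers),
        )

    def forward(self, objects, question):
        # objects: (batch, n_obj, obj_dim), question: (batch, q_dim)
        b, n, d = objects.shape
        o_i = objects.unsqueeze(2).expand(b, n, n, d)   # first object of every pair
        o_j = objects.unsqueeze(1).expand(b, n, n, d)   # second object of every pair
        q = question.unsqueeze(1).unsqueeze(1).expand(b, n, n, question.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)        # all n*n ordered pairs + question
        relations = self.g(pairs).sum(dim=(1, 2))       # sum the relation vectors over pairs
        return self.f(relations)                        # answer logits
```

The design choice worth noting is that the same small network g is applied to every ordered pair of objects, so no hand-crafted notion of which relations matter has to be specified in advance.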
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizona Ifehir."}, {"start": 4.8, "end": 9.48, "text": " This paper is from the Google Deep Mind Guys and is about teaching neural networks to be"}, {"start": 9.48, "end": 12.0, "text": " capable of relational reasoning."}, {"start": 12.0, "end": 17.240000000000002, "text": " This means that we can present the algorithm with an image and ask it relatively complex"}, {"start": 17.240000000000002, "end": 18.88, "text": " relational questions."}, {"start": 18.88, "end": 24.28, "text": " For instance, if we show it this image and ask, what is the color of the object that is"}, {"start": 24.28, "end": 28.400000000000002, "text": " closest to the blue object, it would answer red."}, {"start": 28.4, "end": 33.32, "text": " This is a particularly difficult problem because all the algorithm has access to is a bunch"}, {"start": 33.32, "end": 34.48, "text": " of pixels."}, {"start": 34.48, "end": 41.239999999999995, "text": " In computer code, it is near impossible to mathematically express that in an image something is below"}, {"start": 41.239999999999995, "end": 45.480000000000004, "text": " or next to something else, especially in three-dimensional scenes."}, {"start": 45.480000000000004, "end": 50.8, "text": " Beyond the list of colors, this requires a cognitive understanding of the entirety of"}, {"start": 50.8, "end": 51.8, "text": " the image."}, {"start": 51.8, "end": 57.239999999999995, "text": " This is something that we humans are amazingly good at, but computer algorithms are dreadful"}, {"start": 57.24, "end": 62.52, "text": " for this type of work, and this work almost feels like teaching common sense to a learning"}, {"start": 62.52, "end": 63.52, "text": " algorithm."}, {"start": 63.52, "end": 69.04, "text": " This is accomplished by augmenting an already existing neural network with a relational network"}, {"start": 69.04, "end": 70.04, "text": " module."}, {"start": 70.04, "end": 75.36, "text": " This is implemented on top of a recurrent neural network that we call long short-term"}, {"start": 75.36, "end": 82.16, "text": " memory or LSTM that is able to process sequences of information, for instance, an input"}, {"start": 82.16, "end": 83.16, "text": " sentence."}, {"start": 83.16, "end": 88.44, "text": " The more seasoned fellow scholars know that we have talked about LSTM's in earlier episodes,"}, {"start": 88.44, "end": 93.64, "text": " and of course, as always, the video description contains these episodes for your enjoyment."}, {"start": 93.64, "end": 95.75999999999999, "text": " Make sure to have a look, you'll love it."}, {"start": 95.75999999999999, "end": 100.52, "text": " As you can see in this result, this relational reasoning also works for three-dimensional"}, {"start": 100.52, "end": 101.64, "text": " scenes as well."}, {"start": 101.64, "end": 106.84, "text": " The aggregated results in the paper show that this method is not only leaps and bounds"}, {"start": 106.84, "end": 113.92, "text": " beyond the capabilities of already existing algorithms, but, and now, hold on to your papers."}, {"start": 113.92, "end": 118.16, "text": " In many cases, it also shows superhuman performance."}, {"start": 118.16, "end": 123.2, "text": " I love seeing these charts in machine learning papers, where several learning algorithms and"}, {"start": 123.2, "end": 126.72, "text": " humans are benchmarked on the same tasks."}, {"start": 126.72, "end": 132.2, "text": " This paper was barely 
published, and there is already a first unofficial public implementation,"}, {"start": 132.2, "end": 135.0, "text": " and two research papers have already referenced it."}, {"start": 135.0, "end": 140.36, "text": " This is such a great testament to the incredible pace of machine learning research these days,"}, {"start": 140.36, "end": 144.56, "text": " to say that it is competitive would be a huge understatement."}, {"start": 144.56, "end": 148.92, "text": " Achieving high-quality results in relational reasoning is an important cornerstone for"}, {"start": 148.92, "end": 154.32, "text": " achieving general intelligence, and even though there is still much, much more to do,"}, {"start": 154.32, "end": 158.84, "text": " today is one of those days when we can feel that we are a part of the future."}, {"start": 158.84, "end": 163.72, "text": " The failure cases are also reported in the paper and are definitely worthy of your time"}, {"start": 163.72, "end": 164.72, "text": " and attention."}, {"start": 164.72, "end": 169.16, "text": " When I asked for permissions to cover this paper in the series, all three scientists from"}, {"start": 169.16, "end": 173.4, "text": " DeepMind happily answered yes within 30 minutes."}, {"start": 173.4, "end": 175.04, "text": " That's unbelievable."}, {"start": 175.04, "end": 176.04, "text": " Thanks guys!"}, {"start": 176.04, "end": 181.04, "text": " Also, some of these questions sound like ones that we would get in the easier part of"}, {"start": 181.04, "end": 182.24, "text": " an IQ test."}, {"start": 182.24, "end": 188.16, "text": " I wouldn't be very surprised to see a learning algorithm complete a full IQ test with flying"}, {"start": 188.16, "end": 190.24, "text": " colors in the near future."}, {"start": 190.24, "end": 194.72, "text": " If you enjoyed this episode and you feel that eight of these videos a month is worth a"}, {"start": 194.72, "end": 197.92000000000002, "text": " dollar, please consider supporting us on Patreon."}, {"start": 197.92000000000002, "end": 200.92000000000002, "text": " This way, we can make better videos for your enjoyment."}, {"start": 200.92000000000002, "end": 205.52, "text": " We have recently reached a new milestone, which means that part of these funds will be"}, {"start": 205.52, "end": 208.32000000000002, "text": " used to empower research projects."}, {"start": 208.32000000000002, "end": 210.56, "text": " Details are available in the video description."}, {"start": 210.56, "end": 229.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ldO7RD3s4_s
Text-based Editing of Audio Narration | Two Minute Papers #167
The paper "VoCo: Text-based Insertion and Replacement in Audio Narration" is available here: http://gfx.cs.princeton.edu/pubs/Jin_2017_VTI/ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1109588/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Close enough. As you have probably noticed, today we are going to talk about text to speech, or TTS in short. TTS means that we write a piece of text and a computer-synthesized voice will read it aloud for us. This is really useful for reading the news or creating audiobooks that don't have any official voice overs. This work was done by researchers at Princeton University and Adobe and is about text-based audio narration editing. This one is going to be crazy good. The Adobe guys like to call this the Photoshop of voice overs. In a normal situation, we have access to a waveform, and if we wish to change anything in a voice over, we need to edit it. Editing waveforms by hand is extremely difficult. Traditional techniques often can't even reliably find the boundaries between words and letters, let alone edit them. And with this technique, we can cut, copy and even edit this text, and the waveforms will automatically be transformed appropriately using the same voice. Had it struck squarely, it would have killed him. Had it struck squarely, it would have saved him. We can even use new words that have never been uttered in the original narration. We leave the eventuality to time and law. We leave the eventuality to time and believe. It solves an optimization problem where the similarity, smoothness and the pace of the original footage are to be matched as closely as possible. One of the excellent new features is that we can even choose from several different voicings for the new word and insert the one that we deem the most appropriate. For expert users, the pitch and duration are also editable. It's always important to have a look at a new technique and make sure that it works well in practice. But in science, this is only the first step. There has to be more proof that a new proposed method works well in a variety of cases. In this case, a theoretical proof by means of mathematics is not feasible, therefore a user study was carried out where listeners were shown synthesized and real audio samples and had to blindly decide which was which. The algorithm was remarkably successful at deceiving the test subjects. Make sure to have a look at the paper in the description for more details. This technique is traditional in the sense that it doesn't use any sort of neural networks. However, there are great strides being made in that area as well, which I am quite excited to show you in future episodes. And due to some of these newer video and audio editing techniques, I expect that within the internet forums, fake news is going to be an enduring topic. I hope that in parallel with better and better text and video synthesis, there will be an arms race with other methods that are designed to identify these cases. A neural detective, if you will. And now, if you excuse me, I'll give this publicly available TTS one more try and see if I can retire from narrating videos. Thanks for watching and for your generous support, and I'll see you next time. Yup, exact same thing. Bet you didn't even notice it.
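The optimization mentioned above is only described at a high level, so the following Python sketch is a hand-wavy, Viterbi-style illustration of what a unit-selection objective balancing smoothness and pace could look like. The feature vectors, weights and cost terms are made-up placeholders, not the paper's actual formulation.

```python
import numpy as np

def select_units(candidates, target_durations, w_smooth=1.0, w_pace=0.5):
    """candidates[i]: list of (feature_vector, duration) options for unit i (assumed format).
       Returns the index of the chosen option for each unit."""
    n = len(candidates)
    cost = [np.zeros(len(c)) for c in candidates]            # best cost ending in each option
    back = [np.zeros(len(c), dtype=int) for c in candidates]  # best predecessor for backtracking
    for i in range(n):
        for k, (feat, dur) in enumerate(candidates[i]):
            pace_cost = w_pace * abs(dur - target_durations[i])
            if i == 0:
                cost[i][k] = pace_cost
                continue
            # smoothness: how well this option joins onto each option of the previous unit
            joins = [cost[i - 1][p] + w_smooth * np.linalg.norm(feat - candidates[i - 1][p][0])
                     for p in range(len(candidates[i - 1]))]
            back[i][k] = int(np.argmin(joins))
            cost[i][k] = min(joins) + pace_cost
    # backtrack the cheapest path through the options
    path = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return list(reversed(path))
```

The point of the sketch is simply that "similarity, smoothness and pace" can be expressed as costs and minimized jointly over a sequence of candidate audio snippets.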
[{"start": 0.0, "end": 6.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Car Oizro Mnefe Hair."}, {"start": 6.0, "end": 11.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Car Oizro Mnefe Hair."}, {"start": 11.0, "end": 12.0, "text": " Close enough."}, {"start": 12.0, "end": 19.0, "text": " As you have probably noticed, today we are going to talk about text to speech or TTS in short."}, {"start": 19.0, "end": 26.0, "text": " TTS means that we write a piece of text and a computer synthesized voice will read it aloud for us."}, {"start": 26.0, "end": 32.0, "text": " This is really useful for reading the news or creating audiobooks that don't have any official voice overs."}, {"start": 32.0, "end": 40.0, "text": " This work was done by researchers at Princeton University and Adobe and is about text-based audio narration editing."}, {"start": 40.0, "end": 46.0, "text": " This one is going to be crazy good. The Adobe guys like to call this the photoshop of voice overs."}, {"start": 46.0, "end": 54.0, "text": " In a normal situation, we have access to a waveform and if we wish to change anything in a voice over, we need to edit it."}, {"start": 54.0, "end": 65.0, "text": " Editing waveforms by hand is extremely difficult. Traditional techniques often can't even reliably find the boundaries between words and letters let alone edit them."}, {"start": 65.0, "end": 76.0, "text": " And with this technique, we can cut, copy and even edit this text and the waveforms will automatically be transformed appropriately using the same voice."}, {"start": 76.0, "end": 87.0, "text": " Had it struck squarely, it would have killed him."}, {"start": 87.0, "end": 90.0, "text": " Had it struck squarely, it would have saved him."}, {"start": 90.0, "end": 96.0, "text": " We can even use new words that have never been uttered in the original narration."}, {"start": 96.0, "end": 102.0, "text": " We leave the eventuality to time and law."}, {"start": 102.0, "end": 126.0, "text": " We leave the eventuality to time and believe."}, {"start": 126.0, "end": 135.0, "text": " It solves an optimization problem where the similarity, smoothness and the pace of the original footage is to be matched as closely as possible."}, {"start": 135.0, "end": 145.0, "text": " One of the excellent new features is that we can even choose from several different voicings for the new word and insert the one that we deemed the most appropriate."}, {"start": 145.0, "end": 149.0, "text": " For expert users, the pitch and duration is also editable."}, {"start": 149.0, "end": 158.0, "text": " It's always important to have a look at a new technique and make sure that it works well in practice. But in science, this is only the first step."}, {"start": 158.0, "end": 163.0, "text": " There has to be more proof that a new proposed method works well in a variety of cases."}, {"start": 163.0, "end": 178.0, "text": " In this case, a theoretical proof by means of mathematics is not feasible, therefore a user study was carried out where listeners were shown, synthesized and real audio samples and had to blindly decide which was which."}, {"start": 178.0, "end": 182.0, "text": " The algorithm was remarkably successful at deceiving the test subjects."}, {"start": 182.0, "end": 185.0, "text": " Make sure to have a look at the paper in the description for more details."}, {"start": 185.0, "end": 198.0, "text": " This technique is traditional in a sense that it doesn't use any sort of neural networks. 
However, there are great strides being made in that area as well, which I am quite excited to show you in future episodes."}, {"start": 198.0, "end": 207.0, "text": " And due to some of these newer video and audio editing techniques, I expect that within the internet forums, fake news is going to be an enduring topic."}, {"start": 207.0, "end": 217.0, "text": " I hope that in parallel with better and better text and video synthesis, there will be an arms race with other methods that are designed to identify these cases."}, {"start": 217.0, "end": 219.0, "text": " A neural detective, if you will."}, {"start": 219.0, "end": 228.0, "text": " And now, if you excuse me, I'll give this publicly available TTS one more try and see if I can retire from narrating videos."}, {"start": 228.0, "end": 234.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}, {"start": 234.0, "end": 238.0, "text": " Yup, exact same thing. Bet you didn't even notice it."}]
Two Minute Papers
https://www.youtube.com/watch?v=oltKUPTBz9Q
Efficient Yarn-based Cloth Simulations | Two Minute Papers #166
The paper "Efficient Yarn-based Cloth with Adaptive Contact Linearization" is available here: https://www.cs.cornell.edu/projects/YarnCloth/ https://www.cs.cornell.edu/projects/YarnCloth/sg10_acl.pdf Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1142179/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about creating stunning cloth simulations that are rich in yarn-to-yarn contact. Normally, this is a challenging problem because finding and simulating all the possible contacts between tens of thousands of interlinked pieces of geometry is a prohibitively long process. Also, due to the many different kinds of possible loop configurations, these contacts can take an awful lot of different shapes, which all need to be taken into consideration. Since we are so used to seeing these garments moving about in real life, if someone writes a simulator that is off by just a tiny bit, we will immediately spot the difference. I think it is now easy to see why this is a highly challenging problem. This technique optimizes this process by only computing some of the forces that emerge from these yarns pulling on each other and only trying to approximate the rest. The good news is that this approximation is carried out with temporal coherence. This means that these contact models are retained through time and are only rebuilt when it is absolutely necessary. The regions marked with red in these simulations show the domains that are found to be undergoing significant deformation, therefore we need to focus most of our efforts on rebuilding the simulation model for these regions. Look at these results. This is unbelievable. There is so much detail in these simulations, and all this was done seven years ago. In research and technology, this is an eternity. This just blows my mind. The results are also compared against the expensive reference technique as well. And you can see that the differences are minuscule, but the new, improved technique offers a 4-5x speedup over it. For my research projects I also run many of these simulations myself, and many of these tasks take several all-nighters to compute. If someone told me that each of my all-nighters would now count as five, I'd be absolutely delighted. If you haven't subscribed to the series, please make sure to do so, and please also click the bell icon to never miss an episode. We have tons of awesome papers to come in the next few episodes. Looking forward to seeing you there. Thanks for watching and for your generous support. Bye.
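Since the narration describes the adaptive idea but not the implementation, here is a small, hedged Python sketch of what "rebuild the contact model only when necessary" could look like in a time-stepping loop. The region data layout, the drift threshold and the build/advance callbacks are hypothetical stand-ins, not the paper's data structures.

```python
import numpy as np

def step_cloth(regions, build_contact_model, advance, threshold=0.05):
    """regions: list of dicts with yarn node 'positions', a cached 'contact_model',
       and 'positions_at_rebuild'. build_contact_model and advance are supplied by
       the simulator; they are parameters here so only the control flow is shown."""
    for r in regions:
        # how far this region has deformed since its contact model was last rebuilt
        drift = np.max(np.linalg.norm(r["positions"] - r["positions_at_rebuild"], axis=1))
        if r["contact_model"] is None or drift > threshold:
            # expensive step: detect yarn-yarn contacts and linearize their response
            r["contact_model"] = build_contact_model(r["positions"])
            r["positions_at_rebuild"] = r["positions"].copy()
        # cheap step: advance the region using the cached (possibly approximate) model
        r["positions"] = advance(r["positions"], r["contact_model"])
```

Regions that barely move keep reusing their cached model, which is where the reported speedup over always rebuilding comes from.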
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolenei Fehr."}, {"start": 4.46, "end": 9.8, "text": " This paper is about creating stunning cloth simulations that are rich in yarn-to-yarn"}, {"start": 9.8, "end": 10.8, "text": " contact."}, {"start": 10.8, "end": 15.88, "text": " Normally, this is a challenging problem because finding and simulating all the possible"}, {"start": 15.88, "end": 22.8, "text": " contacts between tens of thousands of interlinked pieces of geometry is a prohibitively long process."}, {"start": 22.8, "end": 27.8, "text": " Also, due to the many different kinds of possible loop configurations, these contacts can"}, {"start": 27.8, "end": 32.88, "text": " take an awful lot of different shapes which all need to be taken into consideration."}, {"start": 32.88, "end": 37.72, "text": " Since we are so used to see these garments moving about in real life, if someone writes"}, {"start": 37.72, "end": 43.32, "text": " a simulator that is off just by a tiny bit will immediately spot the difference."}, {"start": 43.32, "end": 47.84, "text": " I think it is now easy to see why this is a highly challenging problem."}, {"start": 47.84, "end": 52.92, "text": " This technique optimizes this process by only computing some of the forces that emerge"}, {"start": 52.92, "end": 57.56, "text": " from these yarns pulling each other and only trying to approximate the rest."}, {"start": 57.56, "end": 62.2, "text": " The good news is that this approximation is carried out with temporal coherence."}, {"start": 62.2, "end": 67.28, "text": " This means that these contact models are retained through time and are only rebuilt when"}, {"start": 67.28, "end": 69.28, "text": " it is absolutely necessary."}, {"start": 69.28, "end": 74.08, "text": " The regions marked with red in these simulations show the domains that are found to be undergoing"}, {"start": 74.08, "end": 79.16, "text": " significant deformation, therefore we need to focus most of our efforts in rebuilding"}, {"start": 79.16, "end": 82.04, "text": " the simulation model for these regions."}, {"start": 82.04, "end": 83.24000000000001, "text": " Look at these results."}, {"start": 83.24000000000001, "end": 84.48, "text": " This is unbelievable."}, {"start": 84.48, "end": 90.32000000000001, "text": " There is so much detail in these simulations and all this was done seven years ago."}, {"start": 90.32000000000001, "end": 93.52000000000001, "text": " In research and technology this is an eternity."}, {"start": 93.52000000000001, "end": 95.32000000000001, "text": " This just blows my mind."}, {"start": 95.32000000000001, "end": 99.56, "text": " The results are also compared against the expensive reference technique as well."}, {"start": 99.56, "end": 104.76, "text": " And you can see that the differences are miniscule, but the new improved technique offers a"}, {"start": 104.76, "end": 107.52000000000001, "text": " 4-5 time speedup over that."}, {"start": 107.52000000000001, "end": 113.04, "text": " For my research project I also run many of these simulations myself and many of these tasks"}, {"start": 113.04, "end": 115.72000000000001, "text": " take several all-nighters to compute."}, {"start": 115.72000000000001, "end": 121.72, "text": " If someone would say that each of my all-nighters would now count as 5, I'd be absolutely delighted."}, {"start": 121.72, "end": 126.28, "text": " If you haven't subscribed to the series, please make sure to do so and please also click"}, {"start": 
126.28, "end": 128.68, "text": " the bell icon to never miss an episode."}, {"start": 128.68, "end": 132.56, "text": " We have tons of awesome papers to come in the next few episodes."}, {"start": 132.56, "end": 134.12, "text": " Looking forward to seeing you there."}, {"start": 134.12, "end": 152.24, "text": " Thanks for watching and for your generous support."}, {"start": 152.24, "end": 170.3, "text": " Bye."}]
Two Minute Papers
https://www.youtube.com/watch?v=SauCsNkGr-E
Iridescent Light Simulations | Two Minute Papers #165
The paper "A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence" and its source code is available here: https://belcour.github.io/blog/research/2017/05/01/brdf-thin-film.html Additional reading: 1. http://www.care2.com/greenliving/amazing-iridescent-fruit-worlds-most-intense-color.html 2. https://academy.allaboutbirds.org/how-birds-make-colorful-feathers/ 3. http://www.cam.ac.uk/research/news/african-fruit-brightest-thing-in-nature-but-does-not-use-pigment-to-create-its-extraordinary-colour Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2139279/ Pollia condensata fruit image credit: Silvia Vignolini - http://www.cam.ac.uk/research/news/african-fruit-brightest-thing-in-nature-but-does-not-use-pigment-to-create-its-extraordinary-colour Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Most of us have fond childhood memories of the colorful physical appearance of bubbles and fuel-water mixtures. We also know surprisingly little about this peculiar phenomenon, where the color of an object changes when we either turn our head or change the lighting. This happens in a very colorful manner, and physicists like to call this iridescence or goniochromism. What is even less known is that if we try to use a light simulation program to make an image with leather, we'll be surprised to see that it also shows a pronounced goniochromatic effect. An even less known fact is that quite a few birds, insects, minerals, seashells, and even some fruits are iridescent as well. I've added links to some really cool additional readings to the video description for your enjoyment. This effect is caused by materials that scatter different colors of light in different directions. A white incoming light is therefore scattered not in one direction, but in a number of different directions, sorted by their colors. This is why we get these beautiful rainbow-colored patterns that we all love so much. Now that we know what iridescence is, the next step is obviously to infuse our light simulation programs with this awesome feature. This paper is about simulating this effect with microfacets, which are tiny microstructures on the surface of rough objects. And with this, it is now suddenly possible to put a thin iridescent film onto a virtual object and create a photorealistic image out of it. If you're into math and would like to read about some tasty spectral integration in the frequency space with Fourier transforms, this paper is for you. If you're not a mathematician, also make sure to have a look, because the production quality of this paper is through the roof. The methodology, derivations and comparisons are all really crisp. Loving it. If you have a look, you will get a glimpse of what it takes to create a work of this quality. This is one of the best papers in photorealistic rendering I've seen in a while. In the meantime, I'm getting more and more messages from you fellow scholars who tell their stories on how they chose to turn their lives around and started studying science because of this series. Wow, that's incredibly humbling, and I really don't know how to express my joy for this. I always say that it's so great to be a part of the future, and I am delighted to see that some of you want to be a part of the future, and not only as an observer but as a research scientist. This sort of impact is stronger than the absolute best case scenario I have ever dreamed of for the series. Thanks for watching and for your generous support and I'll see you next time.
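To give a feel for where those wavelength-dependent colors come from, here is a small, self-contained Python example of classic two-interface thin-film interference at normal incidence. This is textbook physics rather than the paper's method; the paper's contribution is folding this kind of behaviour into microfacet theory with spectral integration, which goes far beyond this toy. The refractive indices and film thickness below are arbitrary example values.

```python
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm, n_air=1.0, n_film=1.33, n_base=1.5):
    # amplitude reflection coefficients at the two interfaces (normal incidence Fresnel)
    r01 = (n_air - n_film) / (n_air + n_film)
    r12 = (n_film - n_base) / (n_film + n_base)
    # phase accumulated by one round trip of the wave through the film
    delta = 4.0 * np.pi * n_film * thickness_nm / wavelength_nm
    # Airy summation of the multiply reflected waves
    r = (r01 + r12 * np.exp(1j * delta)) / (1.0 + r01 * r12 * np.exp(1j * delta))
    return np.abs(r) ** 2

# Reflectance varies strongly across the visible range, which reads as iridescence.
for wl in (450, 550, 650):
    print(wl, round(float(thin_film_reflectance(wl, thickness_nm=400)), 3))
```

Tilting the film changes the optical path length inside it (the phase term picks up a cosine of the refracted angle), which is why the perceived color shifts as we turn our head.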
[{"start": 0.0, "end": 4.78, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.78, "end": 9.9, "text": " From many fond childhood memories, most of us are quite fond of the colorful physical"}, {"start": 9.9, "end": 13.280000000000001, "text": " appearance of bubbles and fuel-water mixtures."}, {"start": 13.280000000000001, "end": 17.8, "text": " We also know surprisingly little about this peculiar phenomenon, where the color of an"}, {"start": 17.8, "end": 22.6, "text": " object changes when we either turn our head or change the lighting."}, {"start": 22.6, "end": 29.2, "text": " This happens in a very colorful manner and physicists like to call this iridescence organiocromism,"}, {"start": 29.2, "end": 33.72, "text": " what is even less known is that if we try to use a light simulation program to make"}, {"start": 33.72, "end": 39.08, "text": " an image with leather, we'll be surprised to see that it also shows a pronounced goniocromatic"}, {"start": 39.08, "end": 40.08, "text": " effect."}, {"start": 40.08, "end": 46.36, "text": " An even more less known fact is that quite a few birds, insects, minerals, seashells, and"}, {"start": 46.36, "end": 49.16, "text": " even some fruits are iridescent as well."}, {"start": 49.16, "end": 54.44, "text": " I've added links to some really cool additional readings to the video description for your enjoyment."}, {"start": 54.44, "end": 60.36, "text": " This effect is caused by materials that scatter different colors of light in different directions."}, {"start": 60.36, "end": 65.75999999999999, "text": " A white incoming light is therefore scattered not in one direction, but in a number of different"}, {"start": 65.75999999999999, "end": 68.2, "text": " directions sorted by their colors."}, {"start": 68.2, "end": 73.52, "text": " This is why we get these beautiful rainbow colored patterns that we all love so much."}, {"start": 73.52, "end": 79.24, "text": " Now that we know what iridescence is, the next step is obviously to infuse our light simulation"}, {"start": 79.24, "end": 81.92, "text": " programs to have this awesome feature."}, {"start": 81.92, "end": 87.24000000000001, "text": " This paper is about simulating this effect with micro facets which are tiny microstructures"}, {"start": 87.24000000000001, "end": 89.64, "text": " on the surface of rough objects."}, {"start": 89.64, "end": 95.48, "text": " And with this, it is now suddenly possible to put a thin iridescent film onto a virtual"}, {"start": 95.48, "end": 98.96000000000001, "text": " object and create a photorealistic image out of it."}, {"start": 98.96000000000001, "end": 103.44, "text": " If you're into math and would like to read about some tasty spectral integration in"}, {"start": 103.44, "end": 107.72, "text": " the frequency spaced with Fourier transforms, this paper is for you."}, {"start": 107.72, "end": 111.96, "text": " If you're not a mathematician, also make sure to have a look because the production quality"}, {"start": 111.96, "end": 114.44, "text": " of this paper is through the roof."}, {"start": 114.44, "end": 119.44, "text": " The methodology, derivations, comparisons are all really crisp."}, {"start": 119.44, "end": 120.44, "text": " Loving it."}, {"start": 120.44, "end": 124.03999999999999, "text": " If you have a look, you will get the glimpse of what it takes to create a work of this"}, {"start": 124.03999999999999, "end": 125.03999999999999, "text": " quality."}, {"start": 125.03999999999999, "end": 129.24, "text": 
" This is one of the best papers in photorealistic rendering I've seen in a while."}, {"start": 129.24, "end": 133.48, "text": " In the meantime, I'm getting more and more messages from you fellow scholars who tell"}, {"start": 133.48, "end": 138.79999999999998, "text": " their stories on how they chose to turn their lives around and started studying science"}, {"start": 138.79999999999998, "end": 140.35999999999999, "text": " because of this series."}, {"start": 140.35999999999999, "end": 145.95999999999998, "text": " Wow, that's incredibly humbling and I really don't know how to express my joy for this."}, {"start": 145.95999999999998, "end": 150.67999999999998, "text": " I always say that it's so great to be a part of the future and I am delighted to see that"}, {"start": 150.67999999999998, "end": 155.51999999999998, "text": " some of you want to be a part of the future and not only as an observer but as a research"}, {"start": 155.51999999999998, "end": 156.51999999999998, "text": " scientist."}, {"start": 156.51999999999998, "end": 161.51999999999998, "text": " This sort of impact is stronger than the absolute best case scenario I have ever dreamed of"}, {"start": 161.51999999999998, "end": 162.72, "text": " for the series."}, {"start": 162.72, "end": 166.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=R5t74AC6I0A
Simulating Cuts On Virtual Bodies | Two Minute Papers #164
The paper "Robust eXtended Finite Elements for Complex Cutting of Deformables" is available here: https://www.animation.rwth-aachen.de/publication/0551/ https://animation.rwth-aachen.de/media/papers/2017-SIGGRAPH-XFEM.pdf Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-185456/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about the absolute favorite thing of computer graphics researchers: destroying virtual objects in the most creative ways. This is the only place on earth where words like deformable bodies and cutting can be used in the same sentence and be delighted about it. This time around, we are going to cut and dismember every virtual object that stands in our way, and then some more. In these animations, we have complex 3D geometry, and the objective is to change these geometries in a way that remains physically correct even in the presence of complex cut surfaces. When such a cut happens, traditional techniques typically delete and duplicate parts of the geometry close to the cut. This is a heavily simplified solution that leads to inaccurate results. Other techniques try to rebuild parts of the geometry that are affected by the cut. This is what computer graphics researchers like to call remeshing, and it works quite well, but it takes ages to perform. Also, it still has drawbacks, for instance, quantities like temperature and deformations also have to be transferred to the new geometry, which is non-trivial to execute properly. In this work, a new technique is proposed that is able to process really complex cuts without creating new geometry. No remeshing takes place, but the mass and stiffness properties of the materials are retained correctly. Also, the fact that it minimizes the geometric processing overhead leads to not only a simpler, but a more efficient solution. There is so much visual detail in the results that I could watch this video 10 times and still find something new in there. There are also some horrifying Game of Thrones kind of experiments in this footage. Watch out! Ouch! The presentation of the results and the part of the video that compares against the previous technique is absolutely brilliant, you have to see it. The paper is also remarkably well written, make sure to have a look at that too. The link is available in the video description. I am really itching to make some longer videos where we can go into some of these derivations and build a strong intuitive understanding of them. That sounds like a ton of fun, and if this could ever become a full-time endeavor, I am more than enthused to start doing more and work on bonus videos like that. If you enjoyed this episode, don't forget to hit the like button and subscribe to the series. Normally it's up to YouTube to decide whether you get a notification or not, so make sure to click the bell icon as well to never miss a Two Minute Papers episode. Thanks for watching and for your generous support and I'll see you next time.
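To hint at how a cut can be represented without rebuilding the mesh, here is a deliberately tiny one-dimensional Python illustration of the Heaviside-enriched displacement field used in extended finite elements. It is a conceptual toy with made-up inputs, not the 3-D formulation from the paper.

```python
import numpy as np

def enriched_displacement(x, nodes, u_std, a_enr, cut_pos):
    """Evaluate u(x) on a 1-D element [nodes[0], nodes[1]] crossed by a cut at cut_pos.
       u_std: standard nodal displacements; a_enr: enrichment degrees of freedom."""
    x0, x1 = nodes
    # standard linear hat functions of the uncut element
    N = np.array([(x1 - x) / (x1 - x0), (x - x0) / (x1 - x0)])
    # shifted Heaviside enrichment: zero at each node, jumps across the cut,
    # so the displacement field can be discontinuous without adding new elements
    H = lambda s: 1.0 if s >= cut_pos else -1.0
    psi = np.array([N[0] * (H(x) - H(x0)), N[1] * (H(x) - H(x1))])
    return N @ np.asarray(u_std) + psi @ np.asarray(a_enr)

# Example: the same element evaluated just left and right of a cut at x = 0.5
print(enriched_displacement(0.49, (0.0, 1.0), [0.0, 0.1], [0.02, 0.02], 0.5))
print(enriched_displacement(0.51, (0.0, 1.0), [0.0, 0.1], [0.02, 0.02], 0.5))
```

The two printed values differ, showing the displacement jump across the cut while the original element and its nodes stay exactly as they were.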
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolene Fahir."}, {"start": 4.4, "end": 9.64, "text": " This paper is about the absolute favorite thing of computer graphics researchers, destroying"}, {"start": 9.64, "end": 12.96, "text": " virtual objects in the most creative ways."}, {"start": 12.96, "end": 18.16, "text": " This is the only place on earth where words like the formable bodies and cutting can be used"}, {"start": 18.16, "end": 21.400000000000002, "text": " in the same sentence and be delighted about it."}, {"start": 21.400000000000002, "end": 27.240000000000002, "text": " This time around we are going to cut and dismember every virtual object that stands in our way,"}, {"start": 27.240000000000002, "end": 28.560000000000002, "text": " and then some more."}, {"start": 28.56, "end": 34.04, "text": " In these animations we have complex 3D geometry and the objective is to change these geometries"}, {"start": 34.04, "end": 39.519999999999996, "text": " in a way that remains physically correct even in the presence of complex cut surfaces."}, {"start": 39.519999999999996, "end": 44.16, "text": " When such a cut happens, traditional techniques typically delete and duplicate parts of the"}, {"start": 44.16, "end": 46.32, "text": " geometry close to the cut."}, {"start": 46.32, "end": 50.84, "text": " This is a heavily simplified solution that leads to inaccurate results."}, {"start": 50.84, "end": 55.56, "text": " Other techniques try to rebuild parts of the geometry that are affected by the cut."}, {"start": 55.56, "end": 59.88, "text": " This is what computer graphics researchers like to call remashing, and it works quite"}, {"start": 59.88, "end": 62.400000000000006, "text": " well, but it takes ages to perform."}, {"start": 62.400000000000006, "end": 67.48, "text": " Also, it still has drawbacks, for instance quantities like temperature and deformations"}, {"start": 67.48, "end": 72.88, "text": " also have to be transferred to the new geometry, which is non-trivial to execute properly."}, {"start": 72.88, "end": 78.36, "text": " In this work, a new technique is proposed that is able to process really complex cuts without"}, {"start": 78.36, "end": 80.12, "text": " creating new geometry."}, {"start": 80.12, "end": 85.32000000000001, "text": " No remashing takes place, but the mass and stiffness properties of the materials are retained"}, {"start": 85.32, "end": 86.32, "text": " correctly."}, {"start": 86.32, "end": 92.0, "text": " Also, the fact that it minimizes the geometric processing overhead leads to a not only simpler,"}, {"start": 92.0, "end": 93.44, "text": " but a more efficient solution."}, {"start": 93.44, "end": 98.96, "text": " There is so much visual detail in the results that I could watch this video 10 times and"}, {"start": 98.96, "end": 101.03999999999999, "text": " still find something new in there."}, {"start": 101.03999999999999, "end": 107.88, "text": " There are also some horrifying game of thrones kind of experiments in this footage."}, {"start": 107.88, "end": 108.88, "text": " Watch out!"}, {"start": 108.88, "end": 111.28, "text": " Ouch!"}, {"start": 111.28, "end": 115.52, "text": " The presentation of the results and the part of the video that compares against the previous"}, {"start": 115.52, "end": 118.84, "text": " technique is absolutely brilliant, you have to see it."}, {"start": 118.84, "end": 123.04, "text": " The paper is also remarkably well written, make sure to have a look at that too."}, 
{"start": 123.04, "end": 125.24000000000001, "text": " The link is available in the video description."}, {"start": 125.24000000000001, "end": 130.32, "text": " I am really itching to make some longer videos where we can go into some of these derivations"}, {"start": 130.32, "end": 133.2, "text": " and build a strong intuitive understanding of them."}, {"start": 133.2, "end": 137.92000000000002, "text": " That sounds like a ton of fun, and if this could ever become a full-time endeavor, I am"}, {"start": 137.92, "end": 142.35999999999999, "text": " more than enthused to start doing more and work on bonus videos like that."}, {"start": 142.35999999999999, "end": 146.04, "text": " If you enjoyed this episode, don't forget to hit the like button and subscribe to the"}, {"start": 146.04, "end": 147.04, "text": " series."}, {"start": 147.04, "end": 151.07999999999998, "text": " Normally it's up to YouTube to decide whether you get a notification or not, so make"}, {"start": 151.07999999999998, "end": 155.39999999999998, "text": " sure to click the bell icon as well to never miss a two minute paper's episode."}, {"start": 155.4, "end": 175.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9bcbh2hC7Hw
DeepMind's AI Creates Images From Your Sentences | Two Minute Papers #163
The paper "Parallel Multiscale Autoregressive Density Estimation" is available here: https://arxiv.org/pdf/1703.03664.pdf Our Patreon page: https://www.patreon.com/TwoMinutePapers Scott Reed's results: https://twitter.com/scott_e_reed/status/841099231666544640 https://twitter.com/scott_e_reed/status/841098907887235076 The older work, PixelCNN is available here: https://arxiv.org/pdf/1606.05328.pdf Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1208035/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is one of those new, absolutely insane papers from the Google DeepMind guys. You're going to see a follow-up work to an algorithm that looks at a bunch of images, and from that, it automatically learns the concept of birds, human faces, or coral reefs. So much so that we are able to write a new sentence, and it will generate a new, close to photorealistic image from this written description. This network is capable of creating images that are significantly different from the ones it has been trained on. This already sounds like science fiction. Completely unreal. This work goes by the name PixelCNN. We'll discuss a follow-up work to that in a moment. The downside of this method is that these images are generated pixel by pixel, and many of these pixels depend on their neighborhood. For instance, if I start to draw one pixel of the beak of a bird, the neighboring pixels have to adhere to this constraint and have to be the continuation of the beak. Clearly, these images have a lot of structure. This means that we cannot do this process in parallel, but have to create these new images one pixel at a time. This is an extremely slow and computationally expensive process, and hence, the original paper showed results with 32 by 32 and 64 by 64 images at most. As we process everything sequentially, the execution time of the algorithm scales linearly with the number of pixels we generate. It is like a factory where there are a ton of assembly lines, but only one person to run around and operate all of them. Here, the goal was to start generating different regions of these images independently, but only in cases when these pixels are not strongly correlated. For instance, doing this with neighboring pixels is a no-go. This is possible, but extremely challenging, and the paper contains details on how to select these pixels and when we can pretend that they are independent. And now, feast your eyes upon these spectacular results. If we are looking for a yellow bird with a black head, orange eyes, and an orange bill, we are going to see much more detailed images. The complexity of the new algorithm scales with the number of pixels not linearly, but in a logarithmic manner, which is basically the equivalent of winning the jackpot in terms of parallelization, and it often results in a more than 100 times speedup. This is a factory that's not run by one guy, but one that works properly. The lead author, Scott Reed, has also published some more amazing results on Twitter as well. In these examples, we can see the evolution of the final image that is generated by the network. It is an amazing feeling to be a part of the future. And note that there are a ton of challenges with the idea. This is one of those typical cases when the idea is only the first step, and execution is king. Make sure to have a look at the paper for more details. According to our regular schedule, we try our best to put out two videos every week. That's eight episodes a month. If you feel that eight of these episodes is worth a dollar for you, please consider supporting us on Patreon. This way, we can create more elaborate episodes for you. The channel is growing at a remarkable rate, and your support has been absolutely amazing. I am honored to have an audience like you fellow scholars. We are quite close to hitting our next milestone, and this milestone will be about giving back more to the scientific community and empowering other research projects.
I've put a link to our Patreon page with the details in the video description. Thanks for watching, and for your generous support, and I'll see you next time.
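The linear-versus-logarithmic scaling argument in this transcript can be made tangible with a tiny step-count comparison. The numbers below are purely illustrative (the group schedule and base resolution are invented), not measurements from the paper.

```python
# Toy step-count comparison: a strictly sequential pixel-by-pixel sampler versus
# a multiscale scheme that fills in groups of weakly dependent pixels in parallel.
import math

def sequential_steps(width, height):
    # One network evaluation per pixel: O(N) sequential steps in the pixel count.
    return width * height

def multiscale_steps(width, height, base=4, groups_per_level=4):
    # Hypothetical schedule: start from a tiny base image and repeatedly double
    # the resolution, generating each level's new pixels in a constant number
    # of parallel groups -> O(log N) sequential passes.
    levels = int(math.log2(width // base))
    return groups_per_level * levels

for size in (32, 64, 256):
    print(f"{size}x{size}  sequential steps: {sequential_steps(size, size):6d}"
          f"   multiscale passes: {multiscale_steps(size, size):3d}")
```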
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Katojol Naifahir."}, {"start": 4.28, "end": 9.58, "text": " This is one of those new, absolutely insane papers from the Google DeepMind guys."}, {"start": 9.58, "end": 14.34, "text": " You're going to see a follow-up work to an algorithm that looks at a bunch of images,"}, {"start": 14.34, "end": 20.46, "text": " and from that, it automatically learns the concept of birds, human faces, or coral reefs."}, {"start": 20.46, "end": 28.86, "text": " So much so that we are able to write a new sentence, and it will generate a new, close to photorealistic image from this written description."}, {"start": 28.86, "end": 34.94, "text": " This network is capable of creating images that are significantly different than the ones it has been trained on."}, {"start": 34.94, "end": 39.14, "text": " This already sounds like science fiction. Completely unreal."}, {"start": 39.14, "end": 44.7, "text": " This work goes by the name Pixels CNN. We'll discuss a follow-up work to that in a moment."}, {"start": 44.7, "end": 49.46, "text": " The downside of this method is that these images are generated pixel by pixel,"}, {"start": 49.46, "end": 52.58, "text": " and many of these pixels depend on their neighborhood."}, {"start": 52.58, "end": 62.099999999999994, "text": " For instance, if I start to draw one pixel of the beak of a bird, the neighboring pixels have to adhere to this constraint, and have to be the continuation of the beak."}, {"start": 62.099999999999994, "end": 64.9, "text": " Clearly, these images have a lot of structure."}, {"start": 64.9, "end": 71.1, "text": " This means that we cannot do this process in parallel, but create these new images one pixel at a time."}, {"start": 71.1, "end": 82.53999999999999, "text": " This is an extremely slow and computationally expensive process, and hence, the original paper showed results with 32 by 32 and 64 by 64 images at most."}, {"start": 82.54, "end": 90.06, "text": " As we process everything sequentially, the execution time of the algorithm scales linearly with the number of pixels we can generate."}, {"start": 90.06, "end": 97.46000000000001, "text": " It is like a factory where there are a ton of assembly lines, but only one person to run around and operate all of them."}, {"start": 97.46000000000001, "end": 106.86000000000001, "text": " Here, the goal was to start generating different regions of these images independently, but only in cases when these pixels are not strongly correlated."}, {"start": 106.86000000000001, "end": 109.9, "text": " For instance, doing this with neighbors is an ogre."}, {"start": 109.9, "end": 118.86000000000001, "text": " This is possible, but extremely challenging, and the paper contains details on how to select these pixels, and when we can pretend them to be independent."}, {"start": 118.86000000000001, "end": 122.78, "text": " And now, feast your eyes upon these spectacular results."}, {"start": 122.78, "end": 131.1, "text": " If we are looking for a yellow bird with a black head, orange eyes, and an orange bill, we are going to see much more detailed images."}, {"start": 131.1, "end": 145.66, "text": " The complexity of the new algorithm scales with the number of pixels not linearly, but in a logarithmic manner, which is basically the equivalent of winning the jackpot in terms of parallelization, and it often results in a more than 100 times speed up."}, {"start": 145.66, "end": 150.06, "text": " This is a factory that's not run 
by one guy, but one that works properly."}, {"start": 150.06, "end": 155.57999999999998, "text": " The lead author, Scott Reed, has also published some more amazing results on Twitter as well."}, {"start": 155.57999999999998, "end": 160.7, "text": " In these examples, we can see the evolution of the final image that is generated by the network."}, {"start": 160.7, "end": 163.82, "text": " It is an amazing feeling to be a part of the future."}, {"start": 163.82, "end": 166.85999999999999, "text": " And note that there is a ton of challenges with the idea."}, {"start": 166.85999999999999, "end": 173.1, "text": " This is one of those typical cases when the idea is only the first step, and execution is king."}, {"start": 173.1, "end": 175.89999999999998, "text": " Make sure to have a look at the paper for more details."}, {"start": 175.89999999999998, "end": 181.1, "text": " According to our regular schedule, we try our best to put out two videos every week."}, {"start": 181.1, "end": 182.85999999999999, "text": " That's eight episodes a month."}, {"start": 182.85999999999999, "end": 188.78, "text": " If you feel that eight of these episodes is worth a dollar for you, please consider supporting us on Patreon."}, {"start": 188.78, "end": 192.06, "text": " This way, we can create more elaborate episodes for you."}, {"start": 192.06, "end": 197.42000000000002, "text": " The channel is growing at a remarkable rate, and your support has been absolutely amazing."}, {"start": 197.42000000000002, "end": 200.62, "text": " I am honored to have an audience like you fellow scholars."}, {"start": 200.62, "end": 206.14, "text": " We are quite close to hitting our next milestone, and this milestone will be about giving back more"}, {"start": 206.14, "end": 210.14, "text": " to the scientific community and empowering other research projects."}, {"start": 210.14, "end": 214.06, "text": " I've put a link to our Patreon page with the details in the video description."}, {"start": 214.06, "end": 221.66, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wlAgyf_e-hA
Style Transfer For Fluid Simulations | Two Minute Papers #162
The paper "Stylized Keyframe Animation of Fluid Simulations" is available here: http://gfx.cs.princeton.edu/pubs/Browning_2014_SKA/index.php Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1330662/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We've seen a lot of fluid and smoke simulations throughout the series. In each of these cases, the objective was to maximize the realism of these animations, often to the point where they are indistinguishable from reality. However, there are cases where creating photorealistic footage is not the main objective. Artists often seek to imbue these fluid and smoke simulations with their own distinctive style, and this style need not be photorealistic. It can be cartoonish, black and white, or take a variety of different color schemes. But unfortunately, to obtain such an effect, we have to sit down, get a bunch of papers, and draw the entirety of the animation frame by frame. And of course, to accomplish this, we also need to be physicists and know the underlying laws of fluid dynamics. That's not only borderline impossible, but extremely laborious as well. It would be really cool to have an algorithm that is somehow able to learn our art style and apply it to a fluid or smoke simulation sequence. But the question is, how exactly do we specify this style? Have a look at this really cool technique. I love the idea behind it. First, we compute a classical smoke simulation, then we freeze a few frames and get the artist to colorize them. After that, the algorithm tries to propagate this artistic style to the entirety of the sequence. Intuitively, this is artistic style transfer for fluid animations, but without using any machine learning techniques. Here, we are doing patch-based regenerative morphing. This awesome term refers to a technique that tries to understand the direction of flows and advect the colored regions according to it in a way that is both visually and temporally coherent. Visually coherent means that it looks as close to plausible as we can make it, and temporally coherent means that we are not looking only at one frame, but a sequence of frames, and the movement through these neighboring frames has to be smooth and consistent. These animation sequences were created from 8 to 9 colorized frames, and whatever you see happening in between was filled in by the algorithm. And again, we are talking about the artistic style here, not the simulation itself. A fine, handcrafted work in a world dominated by advanced learning algorithms. This paper is a bit like a beautiful handmade automatic timepiece in the era of quartz watches. If you enjoyed this episode, make sure to leave a like on the video, and don't forget to subscribe to get a glimpse of the future on the channel twice a week. Thanks for watching and for your generous support, and I'll see you next time.
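As a rough mental model of the keyframe-propagation idea described above, here is a minimal Python sketch that advects the colors of the two nearest colorized keyframes along the flow and cross-fades between them. This is a much simpler stand-in, not the paper's patch-based regenerative morphing, and the keyframes, flow field, and frame indices are invented for illustration.

```python
# Much simplified stand-in for the transcript's idea (NOT the paper's method):
# push keyframe colors through the flow and blend them for temporal coherence.
import numpy as np

def advect(color, velocity, dt):
    # Semi-Lagrangian advection: each pixel looks up where it "came from".
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(xs - dt * velocity[..., 0], 0, w - 1)
    src_y = np.clip(ys - dt * velocity[..., 1], 0, h - 1)
    return color[src_y.round().astype(int), src_x.round().astype(int)]

def stylize_frame(t, key_a, t_a, key_b, t_b, velocity, dt=1.0):
    # Push keyframe A forward and keyframe B backward to time t, then blend.
    col_a, col_b = key_a, key_b
    for _ in range(t - t_a):
        col_a = advect(col_a, velocity, dt)
    for _ in range(t_b - t):
        col_b = advect(col_b, velocity, -dt)
    w = (t - t_a) / float(t_b - t_a)
    return (1.0 - w) * col_a + w * col_b

# Tiny hypothetical example: two 64x64 colorized keyframes and a constant flow.
key0 = np.zeros((64, 64, 3)); key0[..., 0] = 1.0   # artist paints frame 0 red
key9 = np.zeros((64, 64, 3)); key9[..., 2] = 1.0   # artist paints frame 9 blue
flow = np.full((64, 64, 2), 0.5)                    # uniform drift
frame5 = stylize_frame(5, key0, 0, key9, 9, flow)   # in-between frame, filled in
```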
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizhou and Aifahir."}, {"start": 4.36, "end": 8.4, "text": " We've seen a lot of fluid and smoke simulations throughout the series."}, {"start": 8.4, "end": 13.46, "text": " In each of these cases, the objective was to maximize the realism of these animations"}, {"start": 13.46, "end": 17.1, "text": " often to the point where they are indistinguishable from reality."}, {"start": 17.1, "end": 22.34, "text": " However, there are cases where creating photorealistic footage is not the main objective."}, {"start": 22.34, "end": 27.98, "text": " Artists often seek to imbue these fluid and smoke simulations with their own distinctive style"}, {"start": 27.98, "end": 30.7, "text": " and this style needs not to be photorealistic."}, {"start": 30.7, "end": 35.84, "text": " It can be cartoonish, black and white, or take a variety of different color schemes."}, {"start": 35.84, "end": 40.980000000000004, "text": " But unfortunately, to obtain such an effect, we have to sit down, get a bunch of papers,"}, {"start": 40.980000000000004, "end": 44.879999999999995, "text": " and draw the entirety of the animation frame by frame."}, {"start": 44.879999999999995, "end": 51.46, "text": " And of course, to accomplish this, we also need to be physicists and know the underlying laws of fluid dynamics."}, {"start": 51.46, "end": 55.72, "text": " That's not only borderline impossible, but extremely laborious as well."}, {"start": 55.72, "end": 60.98, "text": " It would be really cool to have an algorithm that is somehow able to learn our art style"}, {"start": 60.98, "end": 64.42, "text": " and apply it to a fluid or smoke simulation sequence."}, {"start": 64.42, "end": 68.02, "text": " But the question is, how do we exactly specify this style?"}, {"start": 68.02, "end": 71.96000000000001, "text": " Have a look at this really cool technique. 
I love the idea behind it."}, {"start": 71.96000000000001, "end": 79.46000000000001, "text": " First, we compute a classical smoke simulation, then we freeze a few frames and get the artist to colorize them."}, {"start": 79.46, "end": 85.61999999999999, "text": " After that, the algorithm tries to propagate this artistic style to the entirety of the sequence."}, {"start": 85.61999999999999, "end": 90.05999999999999, "text": " Intuitively, this is artistic style transfer for fluid animations,"}, {"start": 90.05999999999999, "end": 93.25999999999999, "text": " but without using any machine learning techniques."}, {"start": 93.25999999999999, "end": 96.86, "text": " Here, we are doing patch-based regenerative morphing."}, {"start": 96.86, "end": 102.33999999999999, "text": " This awesome term refers to a technique that is trying to understand the direction of flows"}, {"start": 102.33999999999999, "end": 109.3, "text": " and that vac the colored regions according to it in a way that is both visually and temporarily coherent."}, {"start": 109.3, "end": 114.22, "text": " Visually coherent means that it looks as close to plausible as we can make it,"}, {"start": 114.22, "end": 120.14, "text": " and temporarily coherent means that we are not looking only at one frame, but a sequence of frames,"}, {"start": 120.14, "end": 124.86, "text": " and the movement through these neighboring frames has to be smooth and consistent."}, {"start": 124.86, "end": 129.3, "text": " These animation sequences were created from 8 to 9 colorized frames,"}, {"start": 129.3, "end": 133.7, "text": " and whatever you see happening in between was filled in by the algorithm."}, {"start": 133.7, "end": 138.57999999999998, "text": " And again, we are talking about the artistic style here, not the simulation itself."}, {"start": 138.58, "end": 144.18, "text": " A fine, handcrafted work in the world dominated by advanced learning algorithms."}, {"start": 144.18, "end": 150.66000000000003, "text": " This paper is a bit like a beautiful handmade automatic timepiece in the era of quartz watches."}, {"start": 150.66000000000003, "end": 153.94, "text": " If you enjoyed this episode, make sure to leave a like on the video,"}, {"start": 153.94, "end": 158.26000000000002, "text": " and don't forget to subscribe to get a glimpse of the future on the channel twice a week."}, {"start": 158.26, "end": 171.62, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Fevg4aowNyc
AI Learns To Create User Interfaces (pix2code) | Two Minute Papers #161
The paper "pix2code: Generating Code from a Graphical User Interface Screenshot" is available here: https://arxiv.org/abs/1705.07962 https://github.com/tonybeltramelli/pix2code Recommended for you: Recurrent Neural Network Writes Music and Shakespeare Novels - https://www.youtube.com/watch?v=Jkkjy7dVdaY Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://pixabay.com/photo-583839/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Creating applications for mobile Android and iOS devices is a laborious endeavor, which most of the time includes creating a graphical user interface. These are the shiny, front-end interfaces that enable the user to interact with the back-end of our applications. So what about an algorithm that learns how to create these graphical user interfaces and automates part of this process? This piece of work takes one single input image that we can trivially obtain by making a screenshot of the user interface, and it almost immediately provides us with the code that is required to recreate it. What an amazing idea! The algorithm supports several different target platforms. For instance, it can give us code for iOS and Android devices. This code we can hand over to a compiler, which will create an executable application. This technique supports HTML as well for creating websites with a desired user interface. Under the hood, a domain-specific language is being learned, and using this, it is possible to have a concise text representation of a user interface. Note that this is by no means the only use of domain-specific languages. The image of the graphical user interface is learned by a classical convolutional neural network, and this text representation is learned by a technique machine learning researchers like to call long short-term memory, LSTM in short. This is a neural network variant that is able to learn sequences of data and is typically used for language translation, music composition, or learning all the novels of Shakespeare and writing new ones in his style. If you are wondering why these examples are suspiciously specific, we've had an earlier episode about this, and I've put a link to it in the video description. Make sure to have a look, you're going to love it. Also, this year it will have its 20th anniversary. Live long and prosper, little LSTM. Now I already see the forums go up in flames, sweeping generalizations, far-reaching statements on front-end developers around the world getting fired and all that. I'll start out by saying that I highly doubt that this work would mean the end of front-end development jobs in the industry. However, what I do think is that with a few improvements, it can quickly prove its worth by augmenting human labor and cutting down the costs of implementing graphical user interfaces in the future. This is another testament to the variety of tasks modern machine learning algorithms can take care of. The author also has a GitHub repository with a few more clarifications stating that the source code of the project and the dataset will be available soon. Tinkerers rejoice. Thanks for watching and for your generous support and I'll see you next time.
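For readers who want to see what a "CNN screenshot encoder plus LSTM over DSL tokens" looks like in code, here is a hedged PyTorch sketch of that general architecture class. The layer sizes, vocabulary size, and the way the visual feature is fused with the tokens are invented for illustration and are not taken from the pix2code paper.

```python
# Hedged sketch of a screenshot-to-DSL model: a small CNN encodes the GUI
# screenshot, an LSTM consumes the DSL tokens so far, and a linear layer
# predicts the next token. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ScreenshotToDSL(nn.Module):
    def __init__(self, vocab_size=64, embed_dim=64, hidden_dim=256):
        super().__init__()
        # CNN encoder: screenshot -> fixed-size visual feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM decoder over the domain-specific-language tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + 64, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, token_ids):
        visual = self.cnn(image)                               # (B, 64)
        tokens = self.embed(token_ids)                         # (B, T, embed_dim)
        visual = visual.unsqueeze(1).expand(-1, tokens.size(1), -1)
        hidden, _ = self.lstm(torch.cat([tokens, visual], dim=-1))
        return self.out(hidden)                                # next-token logits

# Usage with dummy data: a batch of screenshots and partial DSL token sequences.
model = ScreenshotToDSL()
logits = model(torch.rand(2, 3, 128, 128), torch.randint(0, 64, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 64])
```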
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolnai-Fahir."}, {"start": 4.32, "end": 9.76, "text": " Creating applications for mobile Android and iOS devices is a laborious endeavor, which"}, {"start": 9.76, "end": 14.02, "text": " most of the time includes creating a graphical user interface."}, {"start": 14.02, "end": 18.52, "text": " These are the shiny, front-end interfaces that enable the user to interact with the"}, {"start": 18.52, "end": 20.52, "text": " back-end of our applications."}, {"start": 20.52, "end": 25.560000000000002, "text": " So what about an algorithm that learns how to create these graphical user interfaces"}, {"start": 25.560000000000002, "end": 28.48, "text": " and automates part of this process?"}, {"start": 28.48, "end": 33.36, "text": " This piece of work takes one single input image that we can trivially obtain by making"}, {"start": 33.36, "end": 38.480000000000004, "text": " a screenshot of the user interface, and it almost immediately provides us with the code"}, {"start": 38.480000000000004, "end": 40.9, "text": " that is required to recreate it."}, {"start": 40.9, "end": 42.6, "text": " What an amazing idea!"}, {"start": 42.6, "end": 46.04, "text": " The algorithm supports several different target platforms."}, {"start": 46.04, "end": 50.2, "text": " For instance, it can give us code for iOS and Android devices."}, {"start": 50.2, "end": 55.16, "text": " This code we can hand over to a compiler, which will create an executable application."}, {"start": 55.16, "end": 61.239999999999995, "text": " This technique also supports HTML as well for creating websites with a desired user interface."}, {"start": 61.239999999999995, "end": 66.12, "text": " Under the hood, a domain-specific language is being learned, and using this, it is possible"}, {"start": 66.12, "end": 70.16, "text": " to have a concise text representation of a user interface."}, {"start": 70.16, "end": 74.4, "text": " Note that by no means the only use of domain-specific languages."}, {"start": 74.4, "end": 79.32, "text": " The image of the graphical user interface is learned by a classical convolutional neural"}, {"start": 79.32, "end": 84.32, "text": " network, and this text representation is learned by a technique machine learning researchers"}, {"start": 84.32, "end": 87.03999999999999, "text": " like to call long short-term memory."}, {"start": 87.03999999999999, "end": 88.39999999999999, "text": " LSTM in short."}, {"start": 88.39999999999999, "end": 93.19999999999999, "text": " This is a neural network variant that is able to learn sequences of data and is typically"}, {"start": 93.19999999999999, "end": 98.96, "text": " used for language translation, music composition, or learning all the novels of Shakespeare,"}, {"start": 98.96, "end": 101.08, "text": " and writing new ones in his style."}, {"start": 101.08, "end": 105.88, "text": " If you are wondering why these examples are suspiciously specific, we've had an earlier"}, {"start": 105.88, "end": 109.44, "text": " episode about this, I've put a link to it in the video description."}, {"start": 109.44, "end": 111.8, "text": " Make sure to have a look, you're going to love it."}, {"start": 111.8, "end": 115.67999999999999, "text": " Also, this year it will have its 20th year anniversary."}, {"start": 115.67999999999999, "end": 118.28, "text": " Live long and prosper, little LSTM."}, {"start": 118.28, "end": 124.96, "text": " Now I already see the forums go up in flames, sweeping 
generalizations, far-eaching statements"}, {"start": 124.96, "end": 128.92, "text": " on front-end developers around the world getting fired and all that."}, {"start": 128.92, "end": 134.12, "text": " I start out by saying that I highly doubt that this work would mean the end of front-end"}, {"start": 134.12, "end": 136.2, "text": " development jobs in the industry."}, {"start": 136.2, "end": 141.2, "text": " However, what I do think is that with a few improvements, it can quickly prove its worth"}, {"start": 141.2, "end": 146.07999999999998, "text": " by augmenting human labor and cutting down the costs of implementing graphical user"}, {"start": 146.07999999999998, "end": 147.79999999999998, "text": " interfaces in the future."}, {"start": 147.79999999999998, "end": 152.64, "text": " This is another testament to the variety of tasks modern machine learning algorithms"}, {"start": 152.64, "end": 154.04, "text": " can take care of."}, {"start": 154.04, "end": 159.0, "text": " The author also has a GitHub repository with a few more clarifications stating that the"}, {"start": 159.0, "end": 162.67999999999998, "text": " source code of the project and the dataset will be available soon."}, {"start": 162.67999999999998, "end": 164.0, "text": " Tinkerers rejoice."}, {"start": 164.0, "end": 179.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=4Df_BluxwkU
Simulating Wet Sand | Two Minute Papers #160
The paper "Multi-species simulation of porous sand and water mixtures" is available here: http://web.cs.ucla.edu/~cffjiang/research/wetsand/wetsand_siggraph17.pdf Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ If you're looking for some additional amusement: 1. An even slower motion version of the main scene: https://twitter.com/karoly_zsolnai/status/872497135287140353 2. Watch the citation ("Source: [...]") at the bottom left throughout the video. WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-192988/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. After around 160 episodes of Two Minute Papers, I think it is no secret to anyone that I am helplessly addicted to fluid simulations, so you can already guess what this episode is going to be about. I bet you will be as spellbound by this beautiful footage of wet sand simulations as I was when I first saw it. Before you ask, yes, I have attempted to prepare some slow motion action too. As you remember, simulating the motion of fluids involves solving equations that tell us how the velocity and the pressure evolve in time. Now, the 3D world we live in is a continuum, and we cannot solve for these quantities everywhere because that would take an infinite amount of time. To alleviate this, we can put a grid in our virtual world and obtain these quantities only in these grid points. The higher the resolution of the grid, the more realistic the animations are, but the computation time also scales quite poorly. It is really not a surprise that we have barely seen any wet sand simulations in the visual effects industry so far. Here we have an efficient algorithm to handle these cases, and as you will see, this is not only extremely expensive to compute, but nasty stability issues also arise. Have a look at this example here. These are sand simulations with different cohesion values. Cohesion means the strength of the intermolecular forces that hold the material together. The higher the cohesion, the harder it is to break the sand up, and the bigger the clumps are. This is an important quantity for our simulation, because the higher the water saturation of this block of sand, the more cohesive it is. Now, if we try to simulate this effect with traditional techniques on a coarse grid, we will encounter a weird phenomenon. Namely, the longer our simulation runs, the larger the volume of the sand becomes. An excellent way to demonstrate this phenomenon is using these hourglasses, where you can clearly see that after only a good couple of turns, the amount of sand within is significantly increased. This is particularly interesting because normally in classical fluid simulations, if our grid resolution is insufficient, we typically encounter water volume dissipation, which means that the total amount of mass in the simulation decreases over time. Here, we have the exact opposite: like in a magic trick, after every turn, the volume gets inflated. That's a really peculiar and no less challenging problem. This issue can be alleviated by using a finer grid, which is, as we know, extremely costly to compute, or by the volume fixing method the authors propose, which takes care of this without significantly increasing the execution time of the algorithm. Make sure to have a look at the paper, which is certainly my kind of paper. Lots of beautiful physics and a study on how to solve these equations so that we can obtain an efficient wet sand simulator. And also, don't forget, a fluid paper a day keeps the obsessions away. In the meantime, a word about the Two Minute Papers shirts. I am always delighted to see you fellow scholars sending over photos of yourselves, proudly posing with your newly obtained shirts for the series. Thanks so much and please keep them coming. They are available through twominutepapers.com for the US, and the EU and worldwide link is also available in the video description. Thanks for watching and for your generous support and I'll see you next time.
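To give a feel for the volume inflation problem mentioned above, here is a simplistic Python stand-in: it fakes the numerical drift with a per-step inflation factor and then rescales particle volumes back to the initial total. This is not the authors' volume fixing method, and every number in it is invented for illustration.

```python
# Simplistic stand-in (NOT the paper's volume fixing method): if the simulated
# sand volume slowly inflates over time, one crude correction is to rescale the
# per-particle volumes so their sum matches the volume we started with.
import numpy as np

def naive_volume_fix(particle_volumes, target_total):
    # Uniformly rescale so the total volume is conserved exactly.
    return particle_volumes * (target_total / particle_volumes.sum())

rng = np.random.default_rng(0)
volumes = rng.uniform(0.9, 1.1, size=10_000)   # per-particle volumes at t = 0
target = volumes.sum()

for step in range(100):
    # Stand-in for the drift the transcript describes: each step the coarse-grid
    # simulation spuriously inflates the volume by a tiny factor.
    volumes *= 1.002

print(f"target volume:   {target:10.1f}")
print(f"after 100 steps: {volumes.sum():10.1f}  (inflated)")
print(f"after fix:       {naive_volume_fix(volumes, target).sum():10.1f}")
```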
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojola Ifehir."}, {"start": 5.0, "end": 11.44, "text": " After around 160 episodes into Two Minute Papers, I think it is no secret to anyone that I am"}, {"start": 11.44, "end": 16.44, "text": " helplessly addicted to fluid simulations, so you can already guess what this episode is"}, {"start": 16.44, "end": 17.44, "text": " going to be about."}, {"start": 17.44, "end": 23.68, "text": " I bet you will be as spellbound by this beautiful footage of wet sand simulations as I was when"}, {"start": 23.68, "end": 25.080000000000002, "text": " I first seen it."}, {"start": 25.080000000000002, "end": 29.76, "text": " Before you ask, yes, I have attempted to prepare some slow motion action too."}, {"start": 29.76, "end": 34.84, "text": " As you remember, simulating the motion of fluids involves solving equations that tell us how"}, {"start": 34.84, "end": 38.36, "text": " the velocity and the pressure evolves in time."}, {"start": 38.36, "end": 43.760000000000005, "text": " Now the 3D world we live in is a continuum and we cannot solve these quantities everywhere"}, {"start": 43.760000000000005, "end": 46.400000000000006, "text": " because that would take an infinite amount of time."}, {"start": 46.400000000000006, "end": 51.44, "text": " To alleviate this, we can put a grid in our virtual world and obtain these quantities"}, {"start": 51.44, "end": 53.56, "text": " only in these grid points."}, {"start": 53.56, "end": 57.72, "text": " The higher the resolution the grid is, the more realistic the animations are, but the"}, {"start": 57.72, "end": 60.839999999999996, "text": " computation time also scales quite poorly."}, {"start": 60.839999999999996, "end": 65.84, "text": " It is really not a surprise that we have barely seen any wet sand simulations in the visual"}, {"start": 65.84, "end": 67.84, "text": " effects industry so far."}, {"start": 67.84, "end": 72.6, "text": " Here we have an efficient algorithm to handle these cases and as you will see, this is not"}, {"start": 72.6, "end": 78.28, "text": " only extremely expensive to compute, but nasty stability issues also arise."}, {"start": 78.28, "end": 80.03999999999999, "text": " Have a look at this example here."}, {"start": 80.03999999999999, "end": 83.88, "text": " These are sand simulations with different cohesion values."}, {"start": 83.88, "end": 89.08, "text": " The means the strength of intermolecular forces that hold the material together."}, {"start": 89.08, "end": 94.36, "text": " The higher cohesion is, the harder it is to break the sand up, the bigger the clumps are."}, {"start": 94.36, "end": 98.96, "text": " This is an important quantity for our simulation because the higher the water saturation of this"}, {"start": 98.96, "end": 101.91999999999999, "text": " block of sand, the more cohesive it is."}, {"start": 101.91999999999999, "end": 106.52, "text": " Now if we try to simulate this effect with traditional techniques on a coarse grid,"}, {"start": 106.52, "end": 109.0, "text": " will encounter a weird phenomenon."}, {"start": 109.0, "end": 114.32, "text": " Namely, the longer our simulation runs, the larger the volume of the sand becomes."}, {"start": 114.32, "end": 119.28, "text": " An excellent way to demonstrate this phenomenon is using these R-glasses, where you can clearly"}, {"start": 119.28, "end": 125.56, "text": " see that after only a good couple turns, the amount of sand within is significantly increased."}, {"start": 
125.56, "end": 130.32, "text": " This is particularly interesting because normally in classical fluid simulations, if our grid"}, {"start": 130.32, "end": 136.0, "text": " resolution is insufficient, we typically encounter water volume dissipation, which means that"}, {"start": 136.0, "end": 140.04, "text": " the total amount of mass in the simulation decreases over time."}, {"start": 140.04, "end": 145.74, "text": " Here, we have the exact opposite, like in a magic trick, after every turn, the volume gets"}, {"start": 145.74, "end": 146.74, "text": " inflated."}, {"start": 146.74, "end": 150.28, "text": " That's a really peculiar and no less challenging problem."}, {"start": 150.28, "end": 155.96, "text": " This issue can be alleviated by using a finer grid, which is, as we know, extremely costly"}, {"start": 155.96, "end": 161.36, "text": " to compute, or the authors propose the volume fixing method to take care of this without"}, {"start": 161.36, "end": 164.88, "text": " significantly increasing the execution time of the algorithm."}, {"start": 164.88, "end": 168.88, "text": " Make sure to have a look at the paper, which is certainly my kind of paper."}, {"start": 168.88, "end": 173.96, "text": " Lots of beautiful physics and a study on how to solve these equations so that we can obtain"}, {"start": 173.96, "end": 176.2, "text": " an efficient wet sand simulator."}, {"start": 176.2, "end": 180.07999999999998, "text": " And also, don't forget, a fluid paper a day keeps the obsessions away."}, {"start": 180.07999999999998, "end": 183.07999999999998, "text": " In the meantime, a word about the two minute paper shirts."}, {"start": 183.07999999999998, "end": 188.44, "text": " I am always delighted to see you fellow scholars sending over photos of yourselves, proudly"}, {"start": 188.44, "end": 191.51999999999998, "text": " posing with your newly obtained shirts for the series."}, {"start": 191.51999999999998, "end": 194.32, "text": " Thanks so much and please keep them coming."}, {"start": 194.32, "end": 199.56, "text": " They are available through two minute papers.com for the US and the EU and Worldwide link"}, {"start": 199.56, "end": 201.76, "text": " is also available in the video description."}, {"start": 201.76, "end": 224.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UjuBLS15JqM
Algorithmic Beautification of Selfies | Two Minute Papers #159
The paper "Perspective-aware Manipulation of Portrait Photos" and its demo is available here: http://gfx.cs.princeton.edu/pubs/Fried_2016_PMO/index.php http://faces.cs.princeton.edu/ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-465563/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about a rigorous scientific topic, none other than the creation of the perfect selfie photo. By definition, selfies are made by us, which means that these are typically short-range photos, and due to the perspective distortion of the camera lens, we often experience unpleasant effects like the heavy magnification of the nose and the forehead. To address this, this technique enables us to take a photo, and after that, edit the perceived camera distance for it without changing anything else. Basically, algorithmic beautification. This technique works the following way. We analyze the photo and try to figure out how distant the camera was when the photo was taken. Then, we create a digital model of the perspective camera and create a 3D model of the face. This is a process that mathematicians like to call fitting. It means that if we know the optics of perspective cameras, we can work backwards from the input photo that we have and find an appropriate setup that would result in this photo. Then we will be able to adjust this distance to even out the camera lens distortions. But that's not all, because as we have the digital 3D model of the face, we can do even more. For instance, we can also rotate it around in multiple directions. To build such a 3D model, we typically try to locate several well-recognizable hotspots on the face, such as the chin, eyebrows, nose stem, the region under the nose, eyes, and lips. However, as these hotspots lead to a poor 3D representation of the human face, the authors added a few more of these hotspots to the detection process. This still takes less than 5 seconds. Earlier, we also talked about a neural network based technique that judged our selfie photos by assigning a score to them. I would absolutely love to see how that work would react to a before and after photo that comes from this technique. This way, we could formulate this score as a maximization problem, and as a result, we could have an automated technique that truly creates the perfect selfie photo through these warping operations. The best kind of evaluation is when we let reality be our judge and use images that were taken closer or farther away and compare the output of this technique against them. These true images bear the ground truth label throughout this video. The differences are often barely perceptible, and to provide a better localization of the error, some difference images are shown in the paper. If you are into stereoscopy, there's also an entire section about that as well. The authors also uploaded an interactive version of their work online that anyone can try free of charge. So as always, your scholarly before and after selfie experiments are more than welcome in the comment section. Whether you're already subscribed to the series or just subscribing now, which you should absolutely do, make sure to click the bell icon to never miss an episode. We have lots of amazing works coming up in the next few videos. Hope to see you there again. Thanks for watching and for your generous support, and I'll see you next time.
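The nose-magnification effect described above follows directly from pinhole projection, and a few lines of Python make the intuition concrete. The focal length, the nose offset, and the camera distances below are made-up illustrative values, not data from the paper.

```python
# Hedged sketch of the perspective effect: with a pinhole camera, a point that
# sits a few centimetres closer to the lens (the nose tip) is magnified more
# when the whole face is close to the camera, and the effect fades with distance.
def projected_size(true_size, depth, focal_length=0.05):
    # Pinhole projection: image size = focal_length * true_size / depth.
    return focal_length * true_size / depth

nose_offset = 0.03   # assumed: nose tip sits ~3 cm in front of the cheek plane
feature_size = 0.04  # assumed physical size of the projected feature (4 cm)

for camera_distance in (0.3, 0.6, 1.2, 3.0):   # selfie range -> arm's length -> far
    nose = projected_size(feature_size, camera_distance - nose_offset)
    cheek = projected_size(feature_size, camera_distance)
    print(f"camera at {camera_distance:3.1f} m: nose appears "
          f"{nose / cheek:5.3f}x the size of the cheek")
```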
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Ejolene Fahir."}, {"start": 4.5600000000000005, "end": 10.38, "text": " Today we are going to talk about a rigorous scientific topic, none other than the creation"}, {"start": 10.38, "end": 12.46, "text": " of the perfect selfie photo."}, {"start": 12.46, "end": 17.32, "text": " By definition, selfies are made by us, which means that these are typically short-range"}, {"start": 17.32, "end": 22.96, "text": " photos, and due to the perspective distortion of the camera lens, we often experience unpleasant"}, {"start": 22.96, "end": 27.2, "text": " effects like the heavy magnification of the nose and the forehead."}, {"start": 27.2, "end": 32.839999999999996, "text": " To get this, this technique enables us to take a photo, and after that, edit the perceived"}, {"start": 32.839999999999996, "end": 36.4, "text": " camera distance for it without changing anything else."}, {"start": 36.4, "end": 39.480000000000004, "text": " Basically, algorithmic beautification."}, {"start": 39.480000000000004, "end": 41.56, "text": " This technique works the following way."}, {"start": 41.56, "end": 46.96, "text": " We analyze the photo and try to figure out how distant the camera was when the photo"}, {"start": 46.96, "end": 47.96, "text": " was taken."}, {"start": 47.96, "end": 53.879999999999995, "text": " Then, we create a digital model of the perspective camera and create a 3D model of the face."}, {"start": 53.88, "end": 57.440000000000005, "text": " This is a process that mathematicians like to call fitting."}, {"start": 57.440000000000005, "end": 62.68000000000001, "text": " It means that if we know the optics of perspective cameras, we can work backwards from the input"}, {"start": 62.68000000000001, "end": 68.16, "text": " photo that we have and find an appropriate setup that would result in this photo."}, {"start": 68.16, "end": 73.2, "text": " Then we will be able to adjust this distance to even out the camera lens distortions."}, {"start": 73.2, "end": 78.24000000000001, "text": " But that's not all, because as we have the digital 3D model of the face, we can do even"}, {"start": 78.24000000000001, "end": 79.24000000000001, "text": " more."}, {"start": 79.24000000000001, "end": 83.0, "text": " For instance, we can also rotate it around in multiple directions."}, {"start": 83.0, "end": 88.48, "text": " To build such a 3D model, we typically try to locate several well recognizable hotspots"}, {"start": 88.48, "end": 94.68, "text": " on the face, such as the chin, eyebrows, nose stem, the region under the nose, eyes,"}, {"start": 94.68, "end": 95.68, "text": " and lips."}, {"start": 95.68, "end": 101.32, "text": " However, as these hotspots lead to a poor 3D representation of the human face, the authors"}, {"start": 101.32, "end": 104.96000000000001, "text": " added a few more of these hotspots to the detection process."}, {"start": 104.96000000000001, "end": 107.84, "text": " This still takes less than 5 seconds."}, {"start": 107.84, "end": 113.04, "text": " Earlier, we also talked about a neural network based technique that judged our selfie photos"}, {"start": 113.04, "end": 115.04, "text": " by assigning a score to them."}, {"start": 115.04, "end": 121.24000000000001, "text": " I would absolutely love to see how that work would react to a before and after photo that"}, {"start": 121.24000000000001, "end": 123.0, "text": " comes from this technique."}, {"start": 123.0, "end": 127.72, 
"text": " This way, we can formulate this score as a maximization problem, and as a result, we"}, {"start": 127.72, "end": 132.56, "text": " could have an automated technique that truly creates the perfect selfie photo through"}, {"start": 132.56, "end": 134.36, "text": " these warping operations."}, {"start": 134.36, "end": 139.68, "text": " The best kind of evaluation is when we let reality be our judge and use images that were"}, {"start": 139.68, "end": 145.4, "text": " taken closer or farther away and compare the output of this technique against them."}, {"start": 145.4, "end": 149.16000000000003, "text": " These true images bear the ground truth label throughout this video."}, {"start": 149.16000000000003, "end": 154.12, "text": " The differences are often barely perceptible and to provide a better localization of the"}, {"start": 154.12, "end": 157.08, "text": " error, some different images are shown in the paper."}, {"start": 157.08, "end": 161.76000000000002, "text": " If you are into stereoscopy, there's also an entire section about that as well."}, {"start": 161.76, "end": 167.04, "text": " The authors also uploaded an interactive version of their work online that anyone can try"}, {"start": 167.04, "end": 168.32, "text": " free of charge."}, {"start": 168.32, "end": 173.35999999999999, "text": " So as always, your scholarly before and after selfie experiments are more than welcome in"}, {"start": 173.35999999999999, "end": 174.79999999999998, "text": " the comment section."}, {"start": 174.79999999999998, "end": 179.28, "text": " Whether you're already subscribed to the series or just subscribing now, which you should"}, {"start": 179.28, "end": 183.88, "text": " absolutely do, make sure to click the bell icon to never miss an episode."}, {"start": 183.88, "end": 187.51999999999998, "text": " We have lots of amazing works coming up in the next few videos."}, {"start": 187.51999999999998, "end": 188.95999999999998, "text": " Hope to see you there again."}, {"start": 188.96, "end": 192.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZEjUqZU1hNQ
Simulating Honey Coiling | Two Minute Papers #158
The paper "Variational Stokes: A Unified Pressure-Viscosity Solver for Accurate Viscous Liquids" is available here: https://cs.uwaterloo.ca/~elariono/stokes/index.html Recommended for you: Simulating Viscosity and Melting Fluids - https://www.youtube.com/watch?v=KgIrnR2O8KQ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1006972/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode is about simulating a beautiful phenomenon in nature, the buckling and coiling effect of honey. Hmmmm. This effect is due to the high viscosity of materials like honey, which means that they are highly resistant against deformation. Water, however, is much less viscous as it is held together by weaker intermolecular forces, therefore it is easier to deform, making it so easy to pour it into a glass. We had an earlier episode on honey buckling, and as every seasoned Fellow Scholar already knows, the link is available in the video description. One key difference of this work is that the older solution was built upon a Lagrangian approach, which means that the simulation consists of computing the velocities and the pressure that act on these particles. It is a particle-based simulation. Here, a solution is proposed for the Eulerian approach, which means that we do not compute these quantities everywhere in the continuum of space, but we use a fine 3D grid and compute these quantities only in these grid points. No particles to be seen anywhere. There are mathematical techniques to try to guess what happens between these individual grid points, and this process is referred to as interpolation. So normally, in this grid-based approach, if we wish to simulate such a buckling effect, we will be sorely disappointed, because what we will see is that the surface details rapidly disappear due to the inaccuracies in the simulation. The reason for this is that the classical grid-based simulators utilize a technique that mathematicians like to call operator splitting. This means that we solve these fluid equations by taking care of advection, pressure, and viscosity separately. Three different quantities, three separate solutions. This is great because it eases the computational complexity of the problem; however, we have to pay a price for it in the form of newly introduced inaccuracies. For instance, some kinetic and shear forces are significantly dampened, which leads to a loss of detail for buckling effects with traditional techniques. This paper introduces a new way of efficiently solving these operators together in a way that these coupling effects are retained in the simulation. The final solution not only looks stable, but is mathematically proven to work well for a variety of cases, and it also takes into consideration collisions with other solid objects correctly. I absolutely love this, and anyone who is in the middle of creating a new movie with some fluid action going on has to be all over this new technique. And the paper is absolutely amazing. It contains crystal clear writing; many paragraphs are so tight that I'd find it almost impossible to cut even one word from them, yet it is still digestible and absolutely beautifully written. Make sure to have a look, as always, the link is available in the video description. These amazing papers are stories that need to be told to everyone, not only to experts, but to everyone, and before creating these videos I always try my best to be in contact with the authors of these works. And nowadays many of them are telling me that they were really surprised by the influx of views they got after they were showcased in the series. Writing papers that are featured in Two Minute Papers takes a ridiculous amount of hard work, and after that the researchers make them available for everyone free of charge.
And now I am so glad to see them get more and more recognition for their hard work. Absolutely amazing. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear fellow scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5200000000000005, "end": 10.040000000000001, "text": " This episode is about simulating a beautiful phenomenon in nature, the buckling and coiling"}, {"start": 10.040000000000001, "end": 12.0, "text": " effect of honey."}, {"start": 12.0, "end": 13.0, "text": " Hmmmm."}, {"start": 13.0, "end": 17.44, "text": " This effect is due to the high viscosity of materials like honey, which means that they"}, {"start": 17.44, "end": 19.88, "text": " are highly resistant against deformation."}, {"start": 19.88, "end": 25.2, "text": " Water, however, is much less viscous as it is held together by weaker intermolecular"}, {"start": 25.2, "end": 31.6, "text": " forces, therefore it is easier to deform, making it so easy to pour it into a glass."}, {"start": 31.6, "end": 36.24, "text": " We had an earlier episode on honey buckling, and as every season fellow scholar already"}, {"start": 36.24, "end": 39.84, "text": " knows, the link is available in the video description."}, {"start": 39.84, "end": 45.08, "text": " One key difference of this work is that the older solution was built upon a Lagrangian"}, {"start": 45.08, "end": 50.519999999999996, "text": " approach, which means that the simulation consists of computing the velocities and the pressure"}, {"start": 50.519999999999996, "end": 52.84, "text": " that acts on these particles."}, {"start": 52.84, "end": 55.32, "text": " It is a particle-based simulation."}, {"start": 55.32, "end": 60.440000000000005, "text": " Here, a solution is proposed for the Eulerian approach, which means that we do not compute"}, {"start": 60.440000000000005, "end": 66.72, "text": " these quantities everywhere in the continuum of space, but we use a fine 3D grid and we compute"}, {"start": 66.72, "end": 69.64, "text": " these quantities only in these grid points."}, {"start": 69.64, "end": 71.88, "text": " No particles to be seen anywhere."}, {"start": 71.88, "end": 76.44, "text": " There are mathematical techniques to try to guess what happens between these individual"}, {"start": 76.44, "end": 80.68, "text": " grid points and this process is referred to as interpolation."}, {"start": 80.68, "end": 85.32000000000001, "text": " So normally, in this grid-based approach, if we wish to simulate such a buckling effect"}, {"start": 85.32000000000001, "end": 90.60000000000001, "text": " will be sorely disappointed, because what we will see is that the surface details rapidly"}, {"start": 90.60000000000001, "end": 93.96000000000001, "text": " disappear due to the inaccuracies in the simulation."}, {"start": 93.96000000000001, "end": 99.24000000000001, "text": " The reason for this is that the classical grid-based simulators utilize a technique that mathematicians"}, {"start": 99.24000000000001, "end": 101.64000000000001, "text": " like to call operator splitting."}, {"start": 101.64000000000001, "end": 106.84, "text": " This means that we solve these fluid equations by taking care of advection, pressure, and"}, {"start": 106.84, "end": 109.2, "text": " viscosity separately."}, {"start": 109.2, "end": 111.84, "text": " These are the required quantities, separate solutions."}, {"start": 111.84, "end": 117.0, "text": " This is great because it eases the computational complexity of the problem, however, we have"}, {"start": 117.0, "end": 121.56, "text": " to pay a price for it in the form of newly introduced inaccuracies."}, {"start": 121.56, "end": 
126.72, "text": " For instance, some kinetic and shear forces are significantly dampened, which leads to"}, {"start": 126.72, "end": 130.88, "text": " a loss of detail for buckling effects with traditional techniques."}, {"start": 130.88, "end": 136.56, "text": " This paper introduces a new way of efficiently solving these operators together in a way"}, {"start": 136.56, "end": 139.96, "text": " that these coupling effects are retained in the simulation."}, {"start": 139.96, "end": 145.04, "text": " The final solution not only looks stable, but is mathematically proven to work well for"}, {"start": 145.04, "end": 149.92000000000002, "text": " a variety of cases, and it also takes into consideration collisions with other solid"}, {"start": 149.92000000000002, "end": 151.28, "text": " objects correctly."}, {"start": 151.28, "end": 155.92000000000002, "text": " I absolutely love this, and anyone who is in the middle of creating a new movie with"}, {"start": 155.92000000000002, "end": 160.28, "text": " some fluid action going on has to be all over this new technique."}, {"start": 160.28, "end": 163.0, "text": " And the paper is absolutely amazing."}, {"start": 163.0, "end": 169.52, "text": " It contains crystal clear writing, many paragraphs are so tight that I'd find it almost impossible"}, {"start": 169.52, "end": 175.28, "text": " to cut even one word from them, yet it is still digestible and absolutely beautifully"}, {"start": 175.28, "end": 176.28, "text": " written."}, {"start": 176.28, "end": 180.36, "text": " Make sure to have a look, as always, the link is available in the video description."}, {"start": 180.36, "end": 186.0, "text": " These amazing papers are stories that need to be told to everyone, not only to experts"}, {"start": 186.0, "end": 191.52, "text": " to everyone, and before creating these videos I always try my best to be in contact with"}, {"start": 191.52, "end": 193.28, "text": " the authors of these works."}, {"start": 193.28, "end": 197.60000000000002, "text": " And nowadays many of them are telling me that they were really surprised by the influx"}, {"start": 197.60000000000002, "end": 201.24, "text": " of views they got after they were showcased in the series."}, {"start": 201.24, "end": 206.12, "text": " Writing papers that are featured in two-minute papers takes a ridiculous amount of hard work,"}, {"start": 206.12, "end": 210.88, "text": " and after that the researchers make them available for everyone free of charge."}, {"start": 210.88, "end": 216.08, "text": " And now I am so glad to see them get more and more recognition for their hard work."}, {"start": 216.08, "end": 217.4, "text": " Absolutely amazing."}, {"start": 217.4, "end": 221.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5vpklJw7uL0
Designing Decorative Joinery for Furniture | Two Minute Papers #157
The paper "Interactive Design and Stability Analysis of Decorative Joinery for Furniture" is available here: https://jiaxianyao.github.io/joinery/ Note: SketchUp is no longer owned by Google and is now called SketchUp 3D. Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about designing and creating furniture with pieces that are geometrically interlocked. Such pieces not only have artistic value, but such structures can also enhance the integrity and sturdiness of a piece of furniture. This piece of work takes a simple 2D drawing of this interlocking structure and assembles the required pieces for us to build a 3D model from them. This drawing can be done with one of the most user-friendly modeling programs out there, Google SketchUp. This can be used even by novices. From these disassembled parts, it is highly non-trivial to create a 3D printable model. For instance, it is required that these pieces can be put together with one translational motion. Basically, all we need is one nudge to put two of these pieces together. If you have ever bought a new, really simple piece of furniture from IKEA, had a look at the final product at the shop and thought, well, I only have 10 minutes to put this thing together. But anyway, how hard can it be? And you know, three hours of cursing later, the damn thing is still not completely assembled. If you have had any of those experiences before, this one-push assembly condition is for you. And the algorithm automatically finds a sequence of motions that assembles our target 3D shape, and because we only have 2D information from the input, it also has to decide how and where to extrude, thicken, or subtract from these volumes. The search space of possible motions is immense. And we have to take into consideration that we don't even know if there is a possible solution for this puzzle at all. If there is no such solution, the algorithm finds this out and proposes changes to the model that make the construction feasible. And if this wasn't enough, we can also put this digital furniture model into a virtual world where gravitational forces are simulated to see how stable the final result is. Here, the proposed yellow regions indicate that the stability of this table could be improved via small modifications. It is remarkable to see that a novice user who has never done a minute of 3D modeling can create such a beautiful and resilient piece of furniture. Really, really nice work, loving it. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karoji Olaifa here."}, {"start": 4.28, "end": 10.68, "text": " This paper is about designing and creating furniture with pieces that are geometrically interlocked."}, {"start": 10.68, "end": 19.28, "text": " Such pieces not only have artistic value, but such structures can also enhance the integrity and sturdiness of a piece of furniture."}, {"start": 19.28, "end": 28.240000000000002, "text": " This piece of work takes a simple 2D drawing of this interlocking structure and assembles the required pieces for us to build a 3D model from them."}, {"start": 28.24, "end": 33.92, "text": " This drawing can be done with one of the most user friendly modeler program out there, Google Sketchup."}, {"start": 33.92, "end": 36.4, "text": " This can be used even by novices."}, {"start": 36.4, "end": 41.68, "text": " From these disassembled parts, it is highly non-trivial to create a 3D printable model."}, {"start": 41.68, "end": 47.36, "text": " For instance, it is required that these pieces can be put together with one translational motion."}, {"start": 47.36, "end": 52.0, "text": " Basically, all we need is one nudge to put two of these pieces together."}, {"start": 52.0, "end": 59.12, "text": " If you ever had a new, really simple piece of furniture from IKEA, had a look at the final product at the shop and thought,"}, {"start": 59.12, "end": 62.24, "text": " well, I only have 10 minutes to put this thing together."}, {"start": 62.24, "end": 64.24, "text": " But anyway, how hard can it be?"}, {"start": 64.24, "end": 69.36, "text": " And you know, three hours of curzing later, the damn thing is still not completely assembled."}, {"start": 69.36, "end": 74.4, "text": " If you had any of those experiences before, this one push assembly condition is for you."}, {"start": 74.4, "end": 80.64, "text": " And the algorithm automatically finds a sequence of motions that assembles our target 3D shape"}, {"start": 80.64, "end": 85.76, "text": " and because we only have 2D information from the input, it also has to decide how,"}, {"start": 85.76, "end": 89.92, "text": " and where to extrude, thicken, or subtract from these volumes."}, {"start": 89.92, "end": 93.12, "text": " The third space of possible motions is immense."}, {"start": 93.12, "end": 99.12, "text": " And we have to take into consideration that we don't even know if there is a possible solution for this puzzle at all."}, {"start": 99.12, "end": 105.76, "text": " If this is the case, the algorithm finds out and proposes changes to the model that make the construction feasible."}, {"start": 105.76, "end": 111.28, "text": " And if this wasn't enough, we can also put this digital furniture model into a virtual world"}, {"start": 111.28, "end": 116.72, "text": " where gravitational forces are simulated to see how stable the final result is."}, {"start": 116.72, "end": 123.76, "text": " Here, the proposed yellow regions indicate that the stability of this table could be improved via small modifications."}, {"start": 123.76, "end": 132.96, "text": " It is remarkable to see that a novice user who has never done a minute of 3D modeling can create such a beautiful and resilient piece of furniture."}, {"start": 132.96, "end": 135.6, "text": " Really, really nice work, loving it."}, {"start": 135.6, "end": 139.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jDxsGW5KUP0
Self-Illuminating Explosions | Two Minute Papers #156
The paper "Lighting Grid Hierarchy for Self-illuminating Explosions" is available here: http://www.cemyuksel.com/research/lgh/ Rendering course at the Technical University of Vienna: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi Our light transport-related episodes: https://www.youtube.com/playlist?list=PLujxSBD-JXgk1hb8lyu6sTYsLL39r_3bG Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ If you don't mind, make sure to send us a picture of yourself with a piece of merch! WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2262295/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about explosions. To be precise, imagine that we already have the physics simulation data for an explosion on our computer, but we would like to visualize it on our screen. This requires a light simulation program that is able to create an image of this virtual scene that looks exactly the same as it would in reality. We have had plenty of earlier episodes on light transport, and as you know all too well, it is one of my favorite topics. I just can't get enough of it. I've put a link to these related episodes in the video description. If we wish to render a huge smoke plume, we perform something that computer graphics people call volumetric light transport. This means that a ray of light doesn't necessarily bounce off of the surface of materials, but it can penetrate their surfaces and scatter around inside of them. A technique that can deal with this is called volumetric path tracing, and if we wish to create an image of an explosion using that, well, better pack some fast food because it is likely going to take several hours. The explosion in this image took 13 hours, and it is still not rendered perfectly. But this technique is able to solve this problem in 20 minutes, which is almost 40 times quicker. Unbelievable. The key idea is that this super complicated volumetric explosion data can be reimagined as a large batch of point light sources. If we solve this light transport problem between these point light sources, we get a solution that is remarkably similar to the original solution with path tracing. However, solving this new representation is much simpler. But that's only the first step. If we have a bunch of light sources, we can create a grid structure around them, and in these grid points, we can compute shadows and illumination in a highly efficient manner. What's more, we can create multiple of these grid representations. They all work on the very same data, but some of them are finer, and some of them are significantly sparser, more coarse. Another smart observation here is that even though sharp, high frequency illumination details need to be computed on this fine grid, which takes quite a bit of computation time, it is sufficient to solve the coarse, low frequency details on one of these sparser grids. The results look indistinguishable from the ground truth solutions, but the overall computation time is significantly reduced. The paper contains detailed comparisons against other techniques as well. Most of these scenes are rendered using hundreds of thousands of these point light sources, and as you can see, the results are unbelievable. If you would like to learn even more about light transport, I am holding a master level course on this at the Vienna University of Technology in Austria. I thought that the teachings should not only be available for those 30 people who sit in the room who can afford a university education. It should be available for everyone. So we made the entirety of the lecture available for everyone, free of charge, and I am so glad to see that thousands of people have watched it, and to this day I get many messages that they enjoyed it, and now they see the world differently. It was recorded live with the students in the room, and it doesn't have the audio quality of Two Minute Papers. However, what it does well is it conjures up the atmosphere of these lectures, and you can almost feel like one of the students sitting there. 
If you are interested, have a look. The link is available in the video description, and make sure to read this paper too. It's incredible. Thanks for watching, and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Ejone Fahir."}, {"start": 4.36, "end": 7.5600000000000005, "text": " Today, we are going to talk about explosions."}, {"start": 7.5600000000000005, "end": 12.76, "text": " To be precise, imagine that we already have the physics simulation data for an explosion"}, {"start": 12.76, "end": 16.080000000000002, "text": " on our computer, but we would like to visualize it on our screen."}, {"start": 16.080000000000002, "end": 21.080000000000002, "text": " This requires a light simulation program that is able to create an image of this virtual"}, {"start": 21.080000000000002, "end": 24.96, "text": " scene that looks exactly the same as it would in reality."}, {"start": 24.96, "end": 29.84, "text": " We have had plenty of earlier episodes on light transport, and as you know all too well,"}, {"start": 29.84, "end": 32.08, "text": " it is one of my favorite topics."}, {"start": 32.08, "end": 33.64, "text": " I just can't get enough of it."}, {"start": 33.64, "end": 37.16, "text": " I've put a link to these related episodes in the video description."}, {"start": 37.16, "end": 41.64, "text": " If we wish to render a huge smoke plume, we perform something that computer graphics"}, {"start": 41.64, "end": 44.6, "text": " people call volumetric light transport."}, {"start": 44.6, "end": 49.64, "text": " This means that a ray of light doesn't necessarily bounce off of the surface of materials,"}, {"start": 49.64, "end": 53.760000000000005, "text": " but it can penetrate their surfaces and scatter around inside of them."}, {"start": 53.760000000000005, "end": 58.56, "text": " A technique that can deal with this is called volumetric path tracing, and if we wish to"}, {"start": 58.56, "end": 63.96, "text": " create an image of an explosion using that, well, better pack some fast food because it"}, {"start": 63.96, "end": 66.56, "text": " is likely going to take several hours."}, {"start": 66.56, "end": 71.92, "text": " The explosion in this image took 13 hours, and it is still not rendered perfectly."}, {"start": 71.92, "end": 78.28, "text": " But this technique is able to solve this problem in 20 minutes, which is almost 40 times quicker."}, {"start": 78.28, "end": 79.28, "text": " Unbelievable."}, {"start": 79.28, "end": 85.48, "text": " The key idea is that this super complicated volumetric explosion data can be reimagined as a large"}, {"start": 85.48, "end": 87.52000000000001, "text": " batch of point light sources."}, {"start": 87.52, "end": 92.0, "text": " If we solve this light transport problem between these point light sources, we get a solution"}, {"start": 92.0, "end": 96.19999999999999, "text": " that is remarkably similar to the original solution with path tracing."}, {"start": 96.19999999999999, "end": 100.16, "text": " However, solving this new representation is much simpler."}, {"start": 100.16, "end": 101.84, "text": " But that's only the first step."}, {"start": 101.84, "end": 105.92, "text": " If we have a bunch of light sources, we can create a grid structure around them, and in"}, {"start": 105.92, "end": 111.08, "text": " these grid points, we can compute shadows and illumination in a highly efficient manner."}, {"start": 111.08, "end": 115.08, "text": " What's more, we can create multiple of these grid representations."}, {"start": 115.08, "end": 119.92, "text": " They all work on the very same data, but some of them are finer, and some of them are"}, {"start": 119.92, "end": 123.12, 
"text": " significantly sparser, more coarse."}, {"start": 123.12, "end": 128.12, "text": " Another smart observation here is that even though sharp, high frequency illumination details"}, {"start": 128.12, "end": 133.16, "text": " need to be computed on this fine grid, which takes quite a bit of computation time, it"}, {"start": 133.16, "end": 138.68, "text": " is sufficient to solve the coarse, low frequency details on one of these sparser grids."}, {"start": 138.68, "end": 143.88, "text": " The results look indistinguishable from the ground truth solutions, but the overall computation"}, {"start": 143.88, "end": 146.2, "text": " time is significantly reduced."}, {"start": 146.2, "end": 150.35999999999999, "text": " The paper contains detailed comparisons against other techniques as well."}, {"start": 150.35999999999999, "end": 154.79999999999998, "text": " Most of these scenes are rendered using hundreds of thousands of these point light sources,"}, {"start": 154.79999999999998, "end": 158.0, "text": " and as you can see, the results are unbelievable."}, {"start": 158.0, "end": 162.2, "text": " If you would like to learn even more about light transport, I am holding a master level"}, {"start": 162.2, "end": 166.2, "text": " course on this at the Vienna University of Technology in Austria."}, {"start": 166.2, "end": 170.8, "text": " I thought that the teachings should not only be available for those 30 people who sit"}, {"start": 170.8, "end": 174.04000000000002, "text": " in the room who can afford a university education."}, {"start": 174.04000000000002, "end": 176.16000000000003, "text": " It should be available for everyone."}, {"start": 176.16000000000003, "end": 181.48000000000002, "text": " So we made the entirety of the lecture available for everyone, free of charge, and I am so glad"}, {"start": 181.48000000000002, "end": 186.48000000000002, "text": " to see that thousands of people have watched it, and to this day I get many messages that"}, {"start": 186.48000000000002, "end": 189.92000000000002, "text": " they enjoyed it, and now they see the world differently."}, {"start": 189.92000000000002, "end": 194.44, "text": " It was recorded live with the students in the room, and it doesn't have the audio quality"}, {"start": 194.44, "end": 195.8, "text": " of two minute papers."}, {"start": 195.8, "end": 200.88000000000002, "text": " However, what it does well is it conjures up the atmosphere of these lectures, and you can"}, {"start": 200.88000000000002, "end": 203.52, "text": " almost feel like one of the students sitting there."}, {"start": 203.52, "end": 205.12, "text": " If you are interested, have a look."}, {"start": 205.12, "end": 209.20000000000002, "text": " The link is available in the video description, and make sure to read this paper too."}, {"start": 209.20000000000002, "end": 210.36, "text": " It's incredible."}, {"start": 210.36, "end": 231.12, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ugdciqeOPeM
Simulating Liquid-Hair Interactions | Two Minute Papers #155
Our Patreon page: https://www.patreon.com/TwoMinutePapers The paper "A Multi-Scale Model for Simulating Liquid-Hair Interactions", and its source code is available here: http://www.cs.columbia.edu/cg/liquidhair/ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-697927/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We already know that by using a computer we can simulate fluids and we can simulate hair. But what about simulating both of them at the same time? This paper is about liquid-hair interaction and simulating the dynamics of wet hair. Our seasoned Fellow Scholars immediately know that this episode is going to be ample in amazing slow motion footage. I hope I didn't mess up with any of them, you will see soon if this is the case or not. Before we start talking about it, I'd like to note the following remarkable features. The authors uploaded a supplementary video in 4K resolution, executable files for their technique for all three major operating systems, data assets, and they also freshly revealed the full source code of the project. I feel like I'm in heaven. A big Two Minute Papers-style hat tip to the authors for this premium quality presentation. If this paper were a car, it would definitely be a Maserati or a Mercedes. This technique solves the equations for liquid motion along every single hair strand, computes the cohesion effects between the hairs, and it can also simulate the effect of water dripping off the hair. Feast your eyes on these absolutely incredible results. The main issue with such an idea is that the theories of large and small scale simulations are inherently different, and in this case we need both. The large scale simulator would be a standard program that is able to compute how the velocity and pressure of the liquid evolves in time. However, we also wish to model the water droplets contained within one tiny hair strand. With a large scale simulator, this would take a stupendously large amount of time and resources, so the key observation is that a small scale fluid simulator program would be introduced to take care of this. However, these two simulators cannot simply coexist without side effects. As they are two separate programs that work on the very same scene, we have to make sure that as we pass different quantities between them, they will still remain intact. This means that a drop of water that gets trapped in a hair strand has to disappear from the large scale simulator and has to be re-added to it when it drips out. This is a remarkably challenging problem. But with this, we only scratch the surface. Make sure to have a look at the paper that has so much more to offer, it is impossible to even enumerate the list of contributions in such a short video. The quality of this paper simply left me speechless and I would encourage you to take a look as well. And while this amazing footage is rolling, I would like to let you know that Two Minute Papers can exist because of your support through Patreon. Supporters of the series gain really cool perks like watching every single one of these episodes in early access. I am super happy to see how many of you decided to support the series and in return, we are able to create better and better videos for you. Thank you again, you Fellow Scholars are the most amazing audience. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.4, "end": 10.24, "text": " We already know that by using a computer we can simulate fluids and we can simulate hair."}, {"start": 10.24, "end": 14.0, "text": " But what about simulating both of them at the same time?"}, {"start": 14.0, "end": 19.76, "text": " This paper is about liquid hair interaction and simulating the dynamics of wet hair."}, {"start": 19.76, "end": 26.76, "text": " Our season Fellow Scholars immediately know that this episode is going to be ample in amazing slow motion footage."}, {"start": 26.76, "end": 31.720000000000002, "text": " I hope I didn't mess up with any of them, you will see soon if this is the case or not."}, {"start": 31.720000000000002, "end": 36.28, "text": " Before we start talking about it, I'd like to note the following remarkable features."}, {"start": 36.28, "end": 40.84, "text": " The authors uploaded a supplementary video in 4K resolution,"}, {"start": 40.84, "end": 45.56, "text": " executable files for their technique for all three major operating systems,"}, {"start": 45.56, "end": 50.6, "text": " data assets, and they also freshly revealed the full source code of the project."}, {"start": 50.6, "end": 57.24, "text": " Pellia, I feel in heaven, a big two-minute paper style head tip to the authors for this premium"}, {"start": 57.24, "end": 63.72, "text": " quality presentation. If this paper were a car, it would definitely be a Maserati or a Mercedes."}, {"start": 63.72, "end": 69.08, "text": " This technique solves the equations for liquid motion along every single hair strand"}, {"start": 69.08, "end": 74.6, "text": " computes the cohesion effects between the hairs and it can also simulate the effect of water"}, {"start": 74.6, "end": 79.88, "text": " dripping off the hair. Feast your eyes on these absolutely incredible results."}, {"start": 79.88, "end": 85.56, "text": " The main issue with such an idea is that the theory of large and small scale simulations are"}, {"start": 85.56, "end": 91.32, "text": " inherently different and in this case we need both. The large scale simulator would be a standard"}, {"start": 91.32, "end": 96.91999999999999, "text": " program that is able to compute how the velocity and pressure of the liquid evolves in time."}, {"start": 96.91999999999999, "end": 102.84, "text": " However, we also wish to model the water droplets contained within one tiny hair strand."}, {"start": 102.84, "end": 108.19999999999999, "text": " With a large scale simulator, this would take a steepenously large amount of time and resources"}, {"start": 108.2, "end": 113.72, "text": " so the key observation is that a small scale fluid simulator program would be introduced to take"}, {"start": 113.72, "end": 119.96000000000001, "text": " care of this. However, these two simulators cannot simply coexist without side effects."}, {"start": 119.96000000000001, "end": 125.4, "text": " As they are two separate programs that work on the very same scene, we have to make sure that as"}, {"start": 125.4, "end": 131.0, "text": " we pass different quantities between them, they will still remain intact. This means that a drop of"}, {"start": 131.0, "end": 136.6, "text": " water that gets trapped in a hair strand has to disappear from the large scale simulator and has"}, {"start": 136.6, "end": 142.12, "text": " to be re-edited to it when it rips out. This is a remarkably challenging problem. 
But with this,"}, {"start": 142.12, "end": 147.48, "text": " we only scratch the surface. Make sure to have a look at the paper that has so much more to offer,"}, {"start": 147.48, "end": 152.76, "text": " it is impossible to even enumerate the list of contributions within in such a short video."}, {"start": 152.76, "end": 158.44, "text": " The quality of this paper simply left me speechless and I would encourage you to take a look as well."}, {"start": 158.44, "end": 162.68, "text": " And while this amazing footage is rolling, I would like to let you know that two minute"}, {"start": 162.68, "end": 168.36, "text": " papers can exist because of your support through Patreon. Supporters of the series gain really"}, {"start": 168.36, "end": 174.36, "text": " cool perks like watching every single one of these episodes in Early Access. I am super happy to"}, {"start": 174.36, "end": 179.72, "text": " see how many of you decided to support the series and in return, we are able to create better and"}, {"start": 179.72, "end": 185.08, "text": " better videos for you. Thank you again, you fellow scholars are the most amazing audience."}, {"start": 185.08, "end": 199.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wlndIQHtiFw
Real-Time Character Control With Phase-Functioned Neural Networks | Two Minute Papers #154
The paper "Phase-Functioned Neural Networks for Character Control" is available here: http://theorangeduck.com/page/phase-functioned-neural-networks-character-control Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1835354/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this piece of work, we seek to control digital characters in real time. It happens the following way. We specify a target trajectory, and the algorithm has to synthesize a series of motions that follows that path. To make these motions as realistic as possible, this is typically accomplished by unleashing a learning algorithm on a large database that contains a ton of motion information. Previous techniques did not have a good understanding of these databases, and they often synthesized motions from pieces that corresponded to different kinds of movements. This lack of understanding results in stiff and unnatural output motion. Intuitively, it is a bit like putting together a sentence from a set of letters that were cut out one by one from different newspaper articles. It is a fully formed sentence, but it lacks the smoothness and the flow of a properly aligned piece of text. This is a neural network-based technique that introduces a phase function to the learning process. This phase function augments the learning with the timing information of a given motion. With this phase function, the neural network recognizes that we are not only learning periodic motions, but it knows when these motions start and when they end. The final technique takes very little memory, runs in real time, and it accomplishes smooth walking, running, jumping, and climbing motions, and so much more over a variety of terrains with flying colors. In a previous episode, we have discussed a different technique that accomplished something similar with a low and high level controller. One of the major selling points of this technique is that this one offers a unified solution for terrain traversal using only one neural network. This has the potential to make it really big in computer games and real-time animation. It is absolutely amazing to witness this and be a part of the future. Make sure to have a look at the paper, which also contains the details of a terrain-fitting step to make this learning algorithm capable of taking into consideration a variety of obstacles. I would also like to thank Claudio Panacci for his amazing work in translating so many of these episodes to Italian. This makes Two Minute Papers accessible to more people around the globe, and the more people we can reach, the happier I am. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 9.6, "text": " In this piece of work, we seek to control digital characters in real time."}, {"start": 9.6, "end": 11.36, "text": " It happens the following way."}, {"start": 11.36, "end": 17.06, "text": " We specify a target trajectory, and the algorithm has to synthesize a series of motions that"}, {"start": 17.06, "end": 18.48, "text": " follows that path."}, {"start": 18.48, "end": 23.64, "text": " To make these motions as realistic as possible, this is typically accomplished by unleashing"}, {"start": 23.64, "end": 29.76, "text": " a learning algorithm on a large database that contains a ton of motion information."}, {"start": 29.76, "end": 34.72, "text": " Previous techniques did not have a good understanding of these databases, and they often synthesized"}, {"start": 34.72, "end": 39.68, "text": " motions from pieces that corresponded to different kinds of movements."}, {"start": 39.68, "end": 44.2, "text": " This lack of understanding results in stiff and natural output motion."}, {"start": 44.2, "end": 49.3, "text": " Intuitively, it is a bit like putting together a sentence from a set of letters that were"}, {"start": 49.3, "end": 53.56, "text": " cut out one by one from different newspaper articles."}, {"start": 53.56, "end": 58.68000000000001, "text": " It is a fully formed sentence, but it lacks the smoothness and the flow of a properly"}, {"start": 58.68, "end": 60.64, "text": " aligned piece of text."}, {"start": 60.64, "end": 65.16, "text": " This is a neural network-based technique that introduces a face function to the learning"}, {"start": 65.16, "end": 66.28, "text": " process."}, {"start": 66.28, "end": 71.08, "text": " This face function augments the learning with the timing information of a given motion."}, {"start": 71.08, "end": 76.12, "text": " With this face function, the neural network recognizes that we are not only learning periodic"}, {"start": 76.12, "end": 81.08, "text": " motions, but it knows when these motions start and when they end."}, {"start": 81.08, "end": 86.6, "text": " The final technique takes very little memory, runs in real time, and it accomplishes smooth"}, {"start": 86.6, "end": 94.16, "text": " walking, running, jumping, and climbing motions, and so much more over a variety of terrains"}, {"start": 94.16, "end": 96.11999999999999, "text": " with flying colors."}, {"start": 96.11999999999999, "end": 100.44, "text": " In a previous episode, we have discussed a different technique that accomplished something"}, {"start": 100.44, "end": 103.91999999999999, "text": " similar with a low and high level controller."}, {"start": 103.91999999999999, "end": 109.03999999999999, "text": " One of the major selling points of this technique is that this one offers a unified solution"}, {"start": 109.03999999999999, "end": 113.52, "text": " for terrain traversal with using only one neural network."}, {"start": 113.52, "end": 118.6, "text": " This has the potential to make it really big on computer games and real-time animation."}, {"start": 118.6, "end": 123.19999999999999, "text": " It is absolutely amazing to witness this and be a part of the future."}, {"start": 123.19999999999999, "end": 127.47999999999999, "text": " Make sure to have a look at the paper, which also contains the details of a terrain-fitting"}, {"start": 127.47999999999999, "end": 133.8, "text": " step to make this learning algorithm 
capable of taking into consideration a variety of obstacles."}, {"start": 133.8, "end": 139.07999999999998, "text": " I would also like to thank Claudio Panacci for his amazing work in translating so many"}, {"start": 139.07999999999998, "end": 141.2, "text": " of these episodes to Italian."}, {"start": 141.2, "end": 145.56, "text": " This makes two-minute paper successful for more people around the globe and the more people"}, {"start": 145.56, "end": 147.6, "text": " we can reach the happier I am."}, {"start": 147.6, "end": 168.72, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
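The phase function idea from the episode above can be illustrated with a tiny example: instead of fixed weights, a layer's weights are produced by a smooth, cyclic function of the motion phase. The sketch below blends four control weight sets with a cyclic Catmull-Rom spline, which I believe matches the flavor of phase function the authors describe, but the layer sizes, the ReLU activation, and the random weights are placeholders, not the trained network from the paper.

# Simplified sketch of a phase-functioned layer.
import numpy as np

def cyclic_catmull_rom(control, phase):
    # control: (4, ...) weight tensors; phase in [0, 1).
    p = 4.0 * phase
    k1 = int(p) % 4
    k0, k2, k3 = (k1 - 1) % 4, (k1 + 1) % 4, (k1 + 2) % 4
    w = p - int(p)
    c0, c1, c2, c3 = control[k0], control[k1], control[k2], control[k3]
    return (c1
            + w * (0.5 * c2 - 0.5 * c0)
            + w**2 * (c0 - 2.5 * c1 + 2.0 * c2 - 0.5 * c3)
            + w**3 * (1.5 * c1 - 0.5 * c0 - 1.5 * c2 + 0.5 * c3))

rng = np.random.default_rng(0)
W_control = rng.standard_normal((4, 16, 32))   # 4 control points, one layer
b_control = rng.standard_normal((4, 16))

def phase_functioned_layer(x, phase):
    # The weights themselves depend on where we are in the motion cycle.
    W = cyclic_catmull_rom(W_control, phase)
    b = cyclic_catmull_rom(b_control, phase)
    return np.maximum(W @ x + b, 0.0)          # simple ReLU for this sketch

x = rng.standard_normal(32)                    # character/trajectory features
print(phase_functioned_layer(x, phase=0.25).shape)   # (16,)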
Two Minute Papers
https://www.youtube.com/watch?v=2vnLBb18MuQ
Digital Creatures Learn to Navigate in 3D | Two Minute Papers #153
The paper "DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning" is available here: http://www.cs.ubc.ca/~van/papers/2017-TOG-deepLoco/index.html Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1505714/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Earlier, we have talked about a few amazing algorithms to teach digital creatures to walk. And this time, we are interested in controlling the joints of a digital character to not only walk properly, but take into consideration its surroundings. This new version can navigate in 3D with static and dynamically moving obstacles, or even dribble a ball toward a target. Loving the execution and the production value of this paper. This is accomplished by an efficient system that consists of two controllers that are represented by learning algorithms. One, the low-level controller is about maintaining balance and proper limb control by manipulating the joint positions and velocities appropriately. This controller operates on a fine time scale, 30 times per second, and is trained via a four-layer neural network. Two, the high-level controller can accomplish bigger overarching goals, such as following a path or avoiding static and dynamic obstacles. We don't need to run this so often, therefore, to save resources, this controller operates on a coarse time scale, only twice each second, and is trained via a deep convolutional neural network. It also has support for a small degree of transfer learning. Transfer learning means that after successfully learning to solve a problem, we don't have to start from scratch for the next one, but we can reuse some of that valuable knowledge and get a head start. This is a heavily researched area and is likely going to be one of the major next frontiers in machine learning research. Now, make no mistake, it is not like transfer learning is suddenly to be considered a solved problem, but in this particular case, it is finally a possibility. Really cool. I hope this brief exposé fired you up too. This paper is a bomb, make sure to have a look, as always, the link is available in the video description. And by the way, with your support on Patreon, we will soon be able to spend part of our budget on empowering research projects. How amazing is that? The new Two Minute Papers shirts are also flying off the shelves. Happy to hear you're enjoying them so much. If you're interested, hit up twominutepapers.com if you are located in the US. The EU and Worldwide store's link is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifeher."}, {"start": 4.36, "end": 10.3, "text": " Earlier, we have talked about a few amazing algorithms to teach digital creatures to walk."}, {"start": 10.3, "end": 14.84, "text": " And this time, we are interested in controlling the joints of a digital character"}, {"start": 14.84, "end": 19.44, "text": " to not only walk properly, but take into consideration its surroundings."}, {"start": 19.44, "end": 24.84, "text": " This new version can navigate in 3D with static and dynamic moving obstacles"}, {"start": 24.84, "end": 27.76, "text": " or even dribble a ball toward a target."}, {"start": 27.76, "end": 32.08, "text": " Loving the execution and the production value of this paper."}, {"start": 32.08, "end": 36.24, "text": " This is accomplished by an efficient system that consists of two controllers"}, {"start": 36.24, "end": 38.800000000000004, "text": " that are represented by learning algorithms."}, {"start": 38.800000000000004, "end": 44.16, "text": " One, the low-level controller is about maintaining balance and proper limb control"}, {"start": 44.16, "end": 48.0, "text": " by manipulating the joint positions and velocities appropriately."}, {"start": 48.0, "end": 52.480000000000004, "text": " This controller operates on a fine time scale, 30 times per second,"}, {"start": 52.480000000000004, "end": 55.68000000000001, "text": " and is trained via a four-layer neural network."}, {"start": 55.68, "end": 60.16, "text": " Two, the high-level controller can accomplish bigger overarching goals,"}, {"start": 60.16, "end": 64.72, "text": " such as following a path or avoiding static and dynamic obstacles."}, {"start": 64.72, "end": 68.48, "text": " We don't need to run this so often, therefore, to save resources,"}, {"start": 68.48, "end": 73.52, "text": " this controller operates on a coarse time scale only twice each second"}, {"start": 73.52, "end": 77.12, "text": " and is trained via a deep convolutional neural network."}, {"start": 77.12, "end": 80.72, "text": " It also has support for a small degree of transfer learning."}, {"start": 80.72, "end": 85.03999999999999, "text": " Transfer learning means that after successfully learning to solve a problem,"}, {"start": 85.04, "end": 87.68, "text": " we don't have to start from scratch for the next one,"}, {"start": 87.68, "end": 92.0, "text": " but we can reuse some of that valuable knowledge and get a head start."}, {"start": 92.0, "end": 97.04, "text": " This is a heavily researched area and is likely going to be one of the major next frontiers"}, {"start": 97.04, "end": 98.80000000000001, "text": " in machine learning research."}, {"start": 98.80000000000001, "end": 104.4, "text": " Now, make no mistake, it is not like transfer learning is suddenly to be considered a solved problem,"}, {"start": 104.4, "end": 108.08000000000001, "text": " but in this particular case, it is finally a possibility."}, {"start": 108.08000000000001, "end": 108.88000000000001, "text": " Really cool."}, {"start": 108.88000000000001, "end": 111.68, "text": " I hope this brief expose fired you up too."}, {"start": 111.68, "end": 117.44000000000001, "text": " This paper is a bomb, make sure to have a look, as always, the link is available in the video description."}, {"start": 117.44000000000001, "end": 122.72000000000001, "text": " And by the way, with your support on Patreon, we will soon be able to spend part of our budget"}, {"start": 122.72000000000001, 
"end": 124.72000000000001, "text": " on empowering research projects."}, {"start": 124.72000000000001, "end": 126.64000000000001, "text": " How amazing is that?"}, {"start": 126.64000000000001, "end": 130.72, "text": " The new two-minute paper shirts are also flying off the shelves."}, {"start": 130.72, "end": 132.96, "text": " Happy to hear you're enjoying them so much."}, {"start": 132.96, "end": 137.76000000000002, "text": " If you're interested, hit up two-minute papers.com if you are located in the US."}, {"start": 137.76, "end": 142.16, "text": " The EU and Worldwide stores link is available in the video description."}, {"start": 142.16, "end": 170.32, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=D4C1dB9UheQ
AI Learns to Synthesize Pictures of Animals | Two Minute Papers #152
Our Patreon page is available here. Thanks so much for your generous support! https://www.patreon.com/TwoMinutePapers The paper "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks" and its source code is available here: https://junyanz.github.io/CycleGAN/ Our earlier episodes on regularization: https://www.youtube.com/watch?v=6aF9sJrzxaM https://www.youtube.com/watch?v=HTUxsrO-P_8 Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2042765/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I just finished reading this paper and I fell out of the chair. And I can almost guarantee you that the results in this work are so insane, you will have to double or even triple check to believe what you're going to see here. This one is about image translation, which means that the input is an image and the output is a different version of this input image that is changed according to our guidelines. Imagine that we have a Monet painting and we'd like to create a photograph of this beautiful view. There we go. What if we'd like to change this winter landscape to an image created during the summer? There we go. If we are one of those people on the internet forums who just love to compare apples to oranges, this is now also a possibility. And have a look at this. Imagine that we like the background of this image, but instead of the zebras, we would like to have a couple of horses. No problem, coming right up. This algorithm synthesizes them from scratch. The first important thing we should know about this technique is that it uses generative adversarial networks. This means that we have two neural networks battling each other in an arms race. The generator network tries to create more and more realistic images, and these are passed to the discriminator network, which tries to learn the difference between real photographs and fake, forged images. During this process, the two neural networks learn and improve together until they become experts at their own craft. However, this piece of work introduces two novel additions to this process. One, in earlier works, the training samples were typically paired. This means that the photograph of a shoe would be paired to a drawing that depicts it. This additional information helps the training process a great deal and the algorithm would be able to map drawings to photographs. However, a key difference here is that this technique does not require such pairings, so we don't need these labels. We can use significantly more training samples in our data sets, which also helps the learning process. If this is executed well, the technique is able to pair anything to anything else, which results in a remarkably powerful algorithm. Key difference number two. A cycle consistency loss function is introduced to the optimization problem. This means that if we convert a summer image to a winter image and then back to a summer image, we should get the very same input image back. If our learning system obeys this principle, the output quality of the translation is going to be significantly better. This cycle consistency loss is introduced as a regularization term. Our seasoned Fellow Scholars already know what it means, but in case you don't, I've put a link to our explanation in the video description. The paper contains a ton more results and fortunately, the source code for this project is also available. Multiple implementations, in fact. Just as a side note, which is jaw-dropping by the way, there's some rudimentary support for video. Amazing piece of work. Bravo! Now you can also see that the rate of progress in machine learning research is completely out of this world. No doubt that this is the best time there has ever been to be a research scientist. If you've liked this episode, make sure to subscribe to the series and have a look at our Patreon page where you can pick up cool perks like watching every single one of these episodes in early access. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two minute papers with Karo Ejona Ifeher."}, {"start": 4.4, "end": 8.56, "text": " I just finished reading this paper and I fell out of the chair."}, {"start": 8.56, "end": 13.040000000000001, "text": " And I can almost guarantee you that the results in this work are so insane,"}, {"start": 13.040000000000001, "end": 18.0, "text": " you will have to double or even triple check to believe what you're going to see here."}, {"start": 18.0, "end": 22.96, "text": " This one is about image translation, which means that the input is an image"}, {"start": 22.96, "end": 28.72, "text": " and the output is a different version of this input image that is changed according to our guidelines."}, {"start": 28.72, "end": 34.8, "text": " Imagine that we have a Monet painting and we'd like to create a photograph of this beautiful view."}, {"start": 34.8, "end": 41.76, "text": " There we go. What if we'd like to change this winter landscape to an image created during the summer?"}, {"start": 41.76, "end": 47.2, "text": " There we go. If we are one of those people on the internet forums who just love to compare"}, {"start": 47.2, "end": 50.64, "text": " apples to oranges, this is now also a possibility."}, {"start": 51.599999999999994, "end": 55.84, "text": " And have a look at this. Imagine that we like the background of this image,"}, {"start": 55.84, "end": 60.080000000000005, "text": " but instead of the zebras, we would like to have a couple of horses."}, {"start": 60.080000000000005, "end": 64.96000000000001, "text": " No problem coming right up. This algorithm synthesizes them from scratch."}, {"start": 64.96000000000001, "end": 68.96000000000001, "text": " The first important thing we should know about this technique is that it uses"}, {"start": 68.96000000000001, "end": 73.76, "text": " generative adversarial networks. This means that we have two neural networks"}, {"start": 73.76, "end": 79.04, "text": " battling each other in an arms race. The generator network tries to create more and more"}, {"start": 79.04, "end": 84.56, "text": " realistic images and these are passed to the discriminator network which tries to learn"}, {"start": 84.56, "end": 90.4, "text": " the difference between real photographs and fake forged images. During this process,"}, {"start": 90.4, "end": 96.64, "text": " the two neural networks learn and improve together until they become experts at their own craft."}, {"start": 96.64, "end": 101.44, "text": " However, this piece of work introduces two novel additions to this process."}, {"start": 101.44, "end": 106.96000000000001, "text": " One, in earlier works, the training samples were typically paired. This means that the photograph"}, {"start": 106.96000000000001, "end": 112.32000000000001, "text": " of a shoe would be paired to a drawing that depicts it. 
This additional information helps the"}, {"start": 112.32, "end": 118.0, "text": " training process a great deal and the algorithm would be able to map drawings to photographs."}, {"start": 118.0, "end": 123.19999999999999, "text": " However, a key difference here is that without such pairings, we don't need these labels."}, {"start": 123.19999999999999, "end": 129.12, "text": " We can use significantly more training samples in our data sets, which also helps the learning process."}, {"start": 129.12, "end": 134.16, "text": " If this is executed well, the technique is able to pair anything to anything else,"}, {"start": 134.16, "end": 141.04, "text": " which results in a remarkably powerful algorithm. Key difference number two. A cycle consistency"}, {"start": 141.04, "end": 146.48, "text": " loss function is introduced to the optimization problem. This means that if we convert a summer"}, {"start": 146.48, "end": 152.39999999999998, "text": " image to a winter image and then back to a summer image, we should get the very same input image back."}, {"start": 152.39999999999998, "end": 157.92, "text": " If our learning system obeys to this principle, the output quality of the translation is going to be"}, {"start": 157.92, "end": 163.68, "text": " significantly better. This cycle consistency loss is introduced as a regularization term."}, {"start": 163.68, "end": 167.92, "text": " Our season fellow scholars already know what it means, but in case you don't,"}, {"start": 167.92, "end": 173.11999999999998, "text": " I've put a link to our explanation in the video description. The paper contains a ton more"}, {"start": 173.11999999999998, "end": 179.27999999999997, "text": " results and fortunately, the source code for this project is also available. Multiple implementations"}, {"start": 179.27999999999997, "end": 184.95999999999998, "text": " in fact. Just as a side note, which is draw dropping by the way, there's some rudimentary support"}, {"start": 184.95999999999998, "end": 192.23999999999998, "text": " for video. Amazing piece of work. Bravo! Now you can also see that the rate of progress in"}, {"start": 192.23999999999998, "end": 197.67999999999998, "text": " machine learning research is completely out of this world. No doubt that it is the best time to"}, {"start": 197.68, "end": 202.8, "text": " be a research scientist it's ever been. If you've liked this episode, make sure to subscribe to"}, {"start": 202.8, "end": 207.84, "text": " the series and have a look at our Patreon page where you can pick up cool perks like watching"}, {"start": 207.84, "end": 212.8, "text": " every single one of these episodes in early access. Thanks for watching and for your generous"}, {"start": 212.8, "end": 230.48000000000002, "text": " support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=oleylS5XGpg
An Efficient Scattering Material Representation | Two Minute Papers #151
The paper "Downsampling Scattering Parameters for Rendering Anisotropic Media" and its source code is available here: https://shuangz.com/projects/multires-sa16/ Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1747666/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have a look at these beautiful images. Representing these materials that you see here takes more than 25 GB of storage. You could store several feature-length movies on your hard drive using the same amount of storage. And this technique is able to compress these 25 GB down to 45 MB without introducing any significant perceptible difference. That is close to a whopping 500 times more efficient representation. Hmm... This improved representation not only helps ease the storage requirements of these assets, but it also makes rendering, or in other words, the process of creating these images via light simulation programs, typically more than twice as fast. That is a ton of money and time saved for the artists. An important keyword in this piece of work is anisotropic scattering. So what does this mean exactly? The scattering part means that we have to imagine these materials not as a surface, but as a volume in which rays of light bounce around and get absorbed. If we render a piece of cloth made of velvet, twill, or a similar material, there are lots of microscopic differences in the surface, so much so that it is insufficient to treat them as a solid surface, such as wood or metals. We have to think about them as volumes. This is the scattering part. The anisotropy means that light can scatter unevenly in this medium. These rays don't bounce around in all directions with equal probability. This means that there is significant forward and backward scattering in this medium, making it even more difficult to create optimized algorithms that simplify these scattering equations. If you look below here, you'll see these colorful images that researchers like to call difference images. It basically means that we create one image with the perfectly accurate technique as a reference. As expected, this reference image probably takes forever to compute, but it is important to have as a yardstick. Then, we compute one image with the proposed technique that is usually significantly faster. So we have these two images, and sometimes the differences are so difficult to see that we have no way of knowing where the inaccuracies are. So what we do is subtract the two images from each other and assign a color coding to the differences. As the error may be spatially varying, this is super useful because we can recognize exactly where the information is lost. The angrier the colors are, the higher the error is in a given region. As you can see, the proposed technique is significantly more accurate in representing this medium than a naive method using the same amount of storage. This paper is extraordinarily well written. It is one of the finest pieces of craftsmanship I've come across in a long while, and yes, it is a crime not to have a look at it. Also, if you like this episode, make sure to subscribe to the series and check out our other videos. We have more than 150 episodes for you ready to go right now. You'll love it, and there will be lots of fun to be had. Thanks for watching and for your generous support, and I'll see you next time.
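To make the difference image idea concrete, here is a minimal sketch of how such an error map could be computed. The image arrays and the noise level are made up for illustration; a real comparison would load the reference render and the approximate render from disk and map the per-pixel error through a color scale for display.

```python
import numpy as np

def difference_image(reference, result):
    # Per-pixel absolute error between a converged reference render and the
    # image produced by the faster technique, averaged over color channels.
    # Larger ("angrier") values mark regions where more information is lost.
    return np.abs(reference.astype(np.float64) - result.astype(np.float64)).mean(axis=-1)

reference = np.random.rand(256, 256, 3)                   # stand-in for the reference render
result = reference + 0.02 * np.random.randn(256, 256, 3)  # stand-in for the approximation
error_map = difference_image(reference, result)
print("max error:", error_map.max(), "mean error:", error_map.mean())
```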
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Ejona Ifehir."}, {"start": 4.6000000000000005, "end": 6.84, "text": " Have a look at these beautiful images."}, {"start": 6.84, "end": 12.92, "text": " Representing these materials that you see here takes more than 25GB of storage."}, {"start": 12.92, "end": 18.52, "text": " You could store several feature length movies on your hard drive using the same amount of storage."}, {"start": 18.52, "end": 28.12, "text": " And this technique is able to compress all these 25GB into 45Mb without introducing any significant perceptible difference."}, {"start": 28.12, "end": 33.2, "text": " That is close to a whopping 500 times more efficient representation."}, {"start": 33.2, "end": 34.84, "text": " Hmm..."}, {"start": 34.84, "end": 40.2, "text": " This improved representation not only helps easing the storage requirements of these assets,"}, {"start": 40.2, "end": 45.56, "text": " but it also makes the rendering times, or in other words, the process of creating these images"}, {"start": 45.56, "end": 50.36, "text": " via light simulation programs typically more than twice as fast to process."}, {"start": 50.36, "end": 54.32, "text": " That is a ton of money and time saved for the artists."}, {"start": 54.32, "end": 58.8, "text": " An important keyword in this piece of work is anisotropic scattering."}, {"start": 58.8, "end": 60.480000000000004, "text": " So what does this mean exactly?"}, {"start": 60.480000000000004, "end": 65.52, "text": " The scattering part means that we have to imagine these materials not as a surface,"}, {"start": 65.52, "end": 70.12, "text": " but as a volume in which rays of light bounce around and get absorbed."}, {"start": 70.12, "end": 74.96000000000001, "text": " If we render a piece of cloth made of velvet, twill, or a similar material,"}, {"start": 74.96000000000001, "end": 78.0, "text": " there are lots of microscopic differences in the surface,"}, {"start": 78.0, "end": 84.32, "text": " so much so that it is insufficient to treat them as a solid surface, such as wood or metals."}, {"start": 84.32, "end": 86.6, "text": " We have to think about them as volumes."}, {"start": 86.6, "end": 88.12, "text": " This is the scattering part."}, {"start": 88.12, "end": 92.76, "text": " The anisotropy means that light can scatter unevenly in this medium."}, {"start": 92.76, "end": 97.12, "text": " These rays don't bounce around in all directions with equal probability."}, {"start": 97.12, "end": 101.8, "text": " This means that there is significant forward and backward scattering in this media,"}, {"start": 101.8, "end": 107.96000000000001, "text": " making it even more difficult to create more optimized algorithms that simplify these scattering equations."}, {"start": 107.96, "end": 114.47999999999999, "text": " If you look below here, you'll see these colorful images that researchers like to call different images."}, {"start": 114.47999999999999, "end": 120.32, "text": " It basically means that we create one image with the perfectly accurate technique as a reference."}, {"start": 120.32, "end": 124.52, "text": " As expected, this reference image probably takes forever to compute,"}, {"start": 124.52, "end": 127.24, "text": " but is important to have as a yardstick."}, {"start": 127.24, "end": 133.04, "text": " Then, we compute one image with the proposed technique that is usually significantly faster."}, {"start": 133.04, "end": 138.2, "text": " So we have these two 
images, and sometimes the differences are so difficult to see,"}, {"start": 138.2, "end": 141.32, "text": " we have no way of knowing where the inaccuracies are."}, {"start": 141.32, "end": 144.79999999999998, "text": " So what we do is subtract the two images from each other"}, {"start": 144.79999999999998, "end": 147.44, "text": " and assign a color coding for the differences."}, {"start": 147.44, "end": 155.0, "text": " As the error may be spatially varying, this is super useful because we can recognize exactly where the information is lost."}, {"start": 155.0, "end": 159.64, "text": " The angrier the colors are, the higher the error is in a given region."}, {"start": 159.64, "end": 168.6, "text": " As you can see, the proposed technique is significantly more accurate in representing this medium than a naive method using the same amount of storage."}, {"start": 168.6, "end": 171.48, "text": " This paper is extraordinarily well written."}, {"start": 171.48, "end": 176.39999999999998, "text": " It is one of the finest pieces of craftsmanship I've come along in a long while,"}, {"start": 176.39999999999998, "end": 179.16, "text": " and yes, it is a crime not having a look at it."}, {"start": 179.16, "end": 184.72, "text": " Also, if you like this episode, make sure to subscribe to the series and check out our other videos."}, {"start": 184.72, "end": 189.44, "text": " We have more than 150 episodes for you ready to go right now."}, {"start": 189.44, "end": 192.6, "text": " You'll love it, and there will be lots of fun to be had."}, {"start": 192.6, "end": 220.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HTUxsrO-P_8
Deep Photo Style Transfer | Two Minute Papers #150
The paper "Deep Photo Style Transfer" is and its source code is available here: https://arxiv.org/pdf/1703.07511.pdf https://github.com/luanfujun/deep-photo-styletransfer One more different implementation: https://github.com/martinbenson/deep-photo-styletransfer Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Distill: http://distill.pub/ Distill article on research debt: http://distill.pub/2017/research-debt/ Recommended for you: How Do Neural Networks See The World? - https://www.youtube.com/watch?v=hBobYd8nNtQ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, e, Esa Turkulainen, Michael Albrecht, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1598418/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's have a look at this majestic technique that is about style transfer for photos. Style transfer is a magical algorithm where we have one photograph with content and one with an interesting style, and the output is a third image with these two photos fused together. This is typically achieved by a classical machine learning technique that we call a convolutional neural network. The more layers these networks contain, the more powerful they are, and the more capable they are of building an intuitive understanding of an image. We had several earlier episodes on visualizing the inner workings of these neural networks; as always, the links are available in the video description. Don't miss out, I'm sure you'll be as amazed by the results as I was when I first saw them. These previous neural style transfer techniques work amazingly well if we are looking for a painterly result. However, for photo style transfer, the close-ups here reveal that they introduce unnecessary distortions to the image. They won't look realistic anymore. But not with this new one. Have a look at these results. This is absolute insanity. They are just right in some sense. There is an elusive quality to them. And this is the challenge. We not only have to put what we are searching for into words, but we have to find a mathematical description of these words to make the computer execute it. So what would this definition be? Just think about this. This is a really challenging question. The authors decided that the photorealism of the output image is to be maximized. Well, this sounds great, but who really knows a rigorous mathematical description of photorealism? One possible solution would be to stipulate that the changes in the output colors have to preserve the ratios and distances of the input style colors. Similar rules are used in linear algebra and computer graphics to make sure shapes don't get distorted as we are tormenting them with rotations, translations, and more. We like to call these operations affine transformations. So the fully scientific description would be that we add a regularization term that stipulates that these colors only undergo affine transformations. And we've used one more new word here. What does this regularization term mean? This means that there are a ton of different possible solutions for transferring the colors, and we are trying to steer the optimizer towards solutions that adhere to some additional criterion, in our case the affine transformations. In the mathematical description of this problem, these additional stipulations appear in the form of a regularization term. I am so happy that you Fellow Scholars have been watching Two Minute Papers for so long that we can finally talk about techniques like this. It's fantastic to have an audience that has this level of understanding of these topics. Love it. Just absolutely love it. The source code of this project is also available. Also, make sure to have a look at Distill, an absolutely amazing new science journal from the Google Brain team. This is no ordinary journal, because what they are looking for is not necessarily novel techniques, but novel and intuitive ways of explaining already existing works. There is also an excellent write-up on research debt that can almost be understood as a manifesto for this journal. A worthy read indeed. I love this new initiative and I am sure we'll hear about this journal a lot in the near future.
Make sure to have a look. There is a link to all of these in the video description. Thanks for watching and for your generous support and I'll see you next time.
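For the curious, here is a toy sketch of how a regularization term enters an optimization objective like the one described above. The content loss, style loss, smoothness term, and the weight lambda are all deliberately simplistic, invented stand-ins; the actual paper uses deep network feature losses and a much more involved photorealism regularizer built around the locally affine color constraint, not a plain smoothness penalty.

```python
import numpy as np

def total_loss(output, content_target, style_target, lam=1e-2):
    # How well we match the content photo (per-pixel here, deep features in practice).
    content_loss = np.mean((output - content_target) ** 2)
    # How well we match the style statistics (mean color here, Gram matrices in practice).
    style_loss = np.mean((output.mean(axis=(0, 1)) - style_target.mean(axis=(0, 1))) ** 2)
    # Regularization term: penalize solutions we consider undesirable; a simple
    # smoothness penalty stands in for the paper's photorealism constraint.
    regularizer = (np.mean(np.abs(np.diff(output, axis=0)))
                   + np.mean(np.abs(np.diff(output, axis=1))))
    return content_loss + style_loss + lam * regularizer

content = np.random.rand(128, 128, 3)
style = np.random.rand(128, 128, 3)
output = 0.5 * (content + style)  # a candidate solution the optimizer would keep refining
print(total_loss(output, content, style))
```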
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 5.0, "end": 10.8, "text": " Let's have a look at this majestic technique that is about style transfer for photos."}, {"start": 10.8, "end": 16.36, "text": " Style transfer is a magical algorithm where we have one photograph with content and one"}, {"start": 16.36, "end": 21.48, "text": " with an interesting style, and the output is a third image with these two photos fused"}, {"start": 21.48, "end": 22.48, "text": " together."}, {"start": 22.48, "end": 27.36, "text": " This is typically achieved by a classical machine learning technique that we call a convolution"}, {"start": 27.36, "end": 28.72, "text": " on your own network."}, {"start": 28.72, "end": 33.8, "text": " The more layers these networks contain, the more powerful they are, and the more capable"}, {"start": 33.8, "end": 37.36, "text": " they are in building an intuitive understanding of an image."}, {"start": 37.36, "end": 42.32, "text": " We had several earlier episodes on visualizing the inner workings of these neural networks,"}, {"start": 42.32, "end": 45.519999999999996, "text": " as always, the links are available in the video description."}, {"start": 45.519999999999996, "end": 50.2, "text": " Don't miss out, I'm sure you'll be as amazed by the results as I was when I first"}, {"start": 50.2, "end": 51.2, "text": " seen them."}, {"start": 51.2, "end": 56.239999999999995, "text": " These previous neural style transfer techniques work amazingly well if we are looking for"}, {"start": 56.239999999999995, "end": 57.72, "text": " a painterly result."}, {"start": 57.72, "end": 63.44, "text": " However, for photo style transfer, the close-ups here reveal that they introduce unnecessary"}, {"start": 63.44, "end": 65.0, "text": " distortions to the image."}, {"start": 65.0, "end": 68.28, "text": " They won't look realistic anymore."}, {"start": 68.28, "end": 69.64, "text": " But not with this new one."}, {"start": 69.64, "end": 71.36, "text": " Have a look at these results."}, {"start": 71.36, "end": 73.6, "text": " This is absolute insanity."}, {"start": 73.6, "end": 76.32, "text": " They are just right in some sense."}, {"start": 76.32, "end": 78.6, "text": " There is an elusive quality to them."}, {"start": 78.6, "end": 80.08, "text": " And this is the challenge."}, {"start": 80.08, "end": 85.72, "text": " We not only have to put what we are searching for into words, but we have to find a mathematical"}, {"start": 85.72, "end": 89.52, "text": " description of these words to make the computer executed."}, {"start": 89.52, "end": 91.88, "text": " So what would this definition be?"}, {"start": 91.88, "end": 93.24, "text": " Just think about this."}, {"start": 93.24, "end": 96.08, "text": " This is a really challenging question."}, {"start": 96.08, "end": 101.52, "text": " The author decided that the photorealism of the output image is to be maximized."}, {"start": 101.52, "end": 107.8, "text": " Well this sounds great, but who really knows a rigorous mathematical description of photorealism."}, {"start": 107.8, "end": 112.92, "text": " One possible solution would be to stipulate that the changes in the output color would"}, {"start": 112.92, "end": 118.16, "text": " have to preserve the ratios and distances of the input style colors."}, {"start": 118.16, "end": 123.32000000000001, "text": " Similar rules are used in linear algebra and computer graphics to make sure shapes don't"}, {"start": 
123.32000000000001, "end": 128.76, "text": " get distorted as we are tormenting them with rotations, translations, and more."}, {"start": 128.76, "end": 132.24, "text": " We like to call these operations affine transformations."}, {"start": 132.24, "end": 138.08, "text": " So the fully scientific description would be that we add a regularization term that stipulates"}, {"start": 138.08, "end": 142.08, "text": " that these colors only undergo affine transformations."}, {"start": 142.08, "end": 144.92000000000002, "text": " And we've used one more new word here."}, {"start": 144.92000000000002, "end": 147.36, "text": " What does this regularization term mean?"}, {"start": 147.36, "end": 152.52, "text": " This means that there are a ton of different possible solutions for transferring the colors"}, {"start": 152.52, "end": 158.0, "text": " and we are trying to steer the optimizer towards solutions that adhere to some additional"}, {"start": 158.0, "end": 161.96, "text": " criterion in our case the affine transformations."}, {"start": 161.96, "end": 166.64000000000001, "text": " In the mathematical description of this problem, these additional stipulations appear in the"}, {"start": 166.64000000000001, "end": 168.84, "text": " form of a regularization term."}, {"start": 168.84, "end": 174.28, "text": " I am so happy that you fellow scholars have been watching too many papers for so long that"}, {"start": 174.28, "end": 177.04, "text": " we can finally talk about techniques like this."}, {"start": 177.04, "end": 182.12, "text": " It's fantastic to have an audience that has this level of understanding of these topics."}, {"start": 182.12, "end": 183.12, "text": " Love it."}, {"start": 183.12, "end": 185.2, "text": " Just absolutely love it."}, {"start": 185.2, "end": 188.0, "text": " The source code of this project is also available."}, {"start": 188.0, "end": 193.28, "text": " Also, make sure to have a look at this still, an absolutely amazing new science journal"}, {"start": 193.28, "end": 194.88, "text": " from the Google Brain team."}, {"start": 194.88, "end": 199.6, "text": " That this is no ordinary journal because what they are looking for is not necessarily novel"}, {"start": 199.6, "end": 205.2, "text": " techniques, but novel and intuitive ways of explaining already existing works."}, {"start": 205.2, "end": 210.48, "text": " There is also an excellent write-up on research that can almost be understood as a manifesto"}, {"start": 210.48, "end": 211.48, "text": " for this journal."}, {"start": 211.48, "end": 213.07999999999998, "text": " A worthy reading deed."}, {"start": 213.07999999999998, "end": 217.88, "text": " I love this new initiative and I am sure we'll hear about this journal a lot in the near"}, {"start": 217.88, "end": 218.88, "text": " future."}, {"start": 218.88, "end": 219.88, "text": " Make sure to have a look."}, {"start": 219.88, "end": 222.44, "text": " There is a link to all of these in the video description."}, {"start": 222.44, "end": 226.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u9UUWqVquXo
AI Creates 3D Models From Faces | Two Minute Papers #149
The paper "Photorealistic Facial Texture Inference Using Deep Neural Networks" is available here: http://www.hao-li.com/Hao_Li/Hao_Li_-_publications.html http://arxiv.org/pdf/1612.00523v1.pdf Two Minute Papers Merch: US: http://twominutepapers.com/ EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Earlier episode on texture synthesis: https://www.youtube.com/watch?v=8u3Hkbev2Gg PatchMatch: https://www.youtube.com/watch?v=n3aoc36V8LM WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Esa Turkulainen, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1961529/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. How cool would it be to be able to place a character representing us in a digital film or a computer game? Of course, it would clearly be an extremely laborious task to digitize the 3D geometry and the albedo map of our face. This albedo map is a texture, a colored pattern that describes how our skin reflects and absorbs light. Capturing such a representation is clearly a very lengthy and expensive process, so get this completely crazy idea: this technique creates a full digital representation of any face from no more than one simple photograph. We can even get historical figures into our digital universe; all we need is one photograph of them. And now, feast your eyes on these incredible results. After taking a photograph, this technique creates two of these albedo maps. One is a complete, low-frequency map which records the entirety of the face but only contains the rough details. The other albedo map contains finer details, but in return, it's incomplete. Do you remember the texture synthesis methods that we discussed earlier in the series? The input was a tiny image patch with a repetitive structure, and after learning the statistical properties of these structures, it was possible to continue them indefinitely. The key insight is that we can also do something akin to that here as well. We take this incomplete albedo map and try to synthesize the missing details. Pretty amazing idea indeed. The authors of the paper invoke a classical learning algorithm, a convolutional neural network, to accomplish that. The deeper the neural network we use, the more high-frequency details appear on the outputs, or in other words, the crisper the image we get. In the paper, you will find a detailed description of their crowdsourced user study that was used to validate this technique, including the user interface and the questions being asked. There are also some comparisons against PatchMatch, one of the landmark techniques for texture synthesis that we have also talked about in an earlier episode. It's pretty amazing to see this Two Minute Papers knowledge base grow and get more and more intertwined. I hope you're enjoying the process as much as I do. Also, due to popular request, the Two Minute Papers t-shirts are now available. This time, we are using a different service for printing these shirts. Please give us some feedback on how you liked it. I've put my email address in the video description. If you attach a photo of yourself wearing some cool Two Minute Papers merch, we'll be even more delighted. Just open twominutepapers.com and you'll immediately have access to it. This link will bring you to the service that ships to the US. The link for shipping outside the US is available in the video description. Thanks for watching and for your generous support. I'll see you next time.
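Here is a minimal sketch of the idea of layering a complete low-frequency albedo map with a partially available high-frequency detail map. The array sizes, the validity mask, and the simple additive blend are all invented for illustration; the actual method synthesizes the missing fine detail with a deep network instead of leaving the base map untouched in those regions.

```python
import numpy as np

def combine_albedo_maps(low_freq, high_freq, mask):
    # Where fine detail was recovered from the photo (mask == 1), layer it on
    # top of the complete but blurry base map; elsewhere keep the base as-is.
    return low_freq + mask[..., None] * high_freq

low_freq = np.random.rand(512, 512, 3)                      # complete, rough albedo map
high_freq = 0.05 * np.random.randn(512, 512, 3)             # fine detail, only partially valid
mask = (np.random.rand(512, 512) > 0.5).astype(np.float64)  # 1 where detail is available
combined = combine_albedo_maps(low_freq, high_freq, mask)
print(combined.shape)
```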
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Karo Ejona Efeira."}, {"start": 4.64, "end": 11.92, "text": " How could it be to be able to place a character representing us in a digital film or a computer game?"}, {"start": 11.92, "end": 18.28, "text": " Of course, it would clearly be an extremely laborious task to digitize the 3D geometry and the"}, {"start": 18.28, "end": 20.28, "text": " Albedo map of our face."}, {"start": 20.28, "end": 26.76, "text": " This Albedo map means a texture, a colored pattern that describes how our skin reflects"}, {"start": 26.76, "end": 28.560000000000002, "text": " and absorbs light."}, {"start": 28.56, "end": 34.04, "text": " Capturing such a representation is clearly a very lengthy and expensive process, so get"}, {"start": 34.04, "end": 36.519999999999996, "text": " this completely crazy idea."}, {"start": 36.519999999999996, "end": 41.56, "text": " This technique creates this full digital representation of any face from no more than"}, {"start": 41.56, "end": 43.6, "text": " one simple photograph."}, {"start": 43.6, "end": 49.04, "text": " We can even get historical figures in our digital universe, all we need is one photograph"}, {"start": 49.04, "end": 50.04, "text": " of them."}, {"start": 50.04, "end": 54.120000000000005, "text": " And now, feast your eyes on these incredible results."}, {"start": 54.12, "end": 58.8, "text": " After taking a photograph, this technique creates two of these Albedo maps."}, {"start": 58.8, "end": 64.88, "text": " One is a complete, low-frequency map which records the entirety of the face but only contains"}, {"start": 64.88, "end": 66.47999999999999, "text": " the rough details."}, {"start": 66.47999999999999, "end": 71.8, "text": " The other Albedo map contains finer details, but in return, it's incomplete."}, {"start": 71.8, "end": 76.67999999999999, "text": " Do you remember the textures synthesis methods that we discussed earlier in the series?"}, {"start": 76.67999999999999, "end": 81.6, "text": " The input was a tiny patch of image with a repetitive structure and after learning"}, {"start": 81.6, "end": 87.52, "text": " the statistical properties of these structures, it was possible to continue them indefinitely."}, {"start": 87.52, "end": 92.28, "text": " The key insight is that we can also do something akin to that here as well."}, {"start": 92.28, "end": 97.28, "text": " We take this incomplete Albedo map and try to synthesize the missing details."}, {"start": 97.28, "end": 99.6, "text": " Pretty amazing idea indeed."}, {"start": 99.6, "end": 104.56, "text": " The authors of the paper invoke a classical learning algorithm, a convolutional neural"}, {"start": 104.56, "end": 106.24, "text": " network to accomplish that."}, {"start": 106.24, "end": 111.52, "text": " The deeper the neural network we use, the more high frequency details appear on the outputs"}, {"start": 111.52, "end": 114.44, "text": " or in other words, the crisper the image we get."}, {"start": 114.44, "end": 119.84, "text": " In the paper, you will find a detailed description of their crowdsource user study that was used"}, {"start": 119.84, "end": 124.8, "text": " to validate this technique, including the user interface and the questions being asked."}, {"start": 124.8, "end": 127.96, "text": " There are also some comparisons against patch match."}, {"start": 127.96, "end": 132.92, "text": " One of the landmark techniques for texture synthesis that we have also talked about in an earlier"}, 
{"start": 132.92, "end": 133.92, "text": " episode."}, {"start": 133.92, "end": 138.28, "text": " It's pretty amazing to see these two-minute papers knowledge-based grow and get more"}, {"start": 138.28, "end": 140.0, "text": " and more intertwined."}, {"start": 140.0, "end": 142.92, "text": " I hope you're enjoying the process as much as I do."}, {"start": 142.92, "end": 148.08, "text": " Also, due to popular requests, the two-minute paper's t-shirts are now available."}, {"start": 148.08, "end": 151.8, "text": " This time, we are using a different service for printing these shirts."}, {"start": 151.8, "end": 153.88, "text": " Please give us some feedback on how you liked it."}, {"start": 153.88, "end": 156.52, "text": " I've put my email address in the video description."}, {"start": 156.52, "end": 161.36, "text": " If you attach a photo of yourself wearing some cool two-minute papers merch, we'll be even"}, {"start": 161.36, "end": 162.68, "text": " more delighted."}, {"start": 162.68, "end": 167.2, "text": " Just open two-minute papers.com and you'll immediately have access to it."}, {"start": 167.2, "end": 170.56, "text": " This link will bring you to the service that ships to the US."}, {"start": 170.56, "end": 174.88, "text": " The link for shipping outside the US is available in the video description."}, {"start": 174.88, "end": 177.16, "text": " Thanks for watching and for your generous support."}, {"start": 177.16, "end": 197.12, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1U3YKnuMS7g
AI Learns Geometric Descriptors From Depth Images | Two Minute Papers #148
The paper "3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions" is available here: http://3dmatch.cs.princeton.edu/ Recommended for you: Our earlier episode on Siamese networks - https://www.youtube.com/watch?v=a3sgFQjEfp4 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Esa Turkulainen, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Awesome Two Minute Papers merch: http://twominutepapers.com/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1851258/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to discuss a great piece of work that shows us how efficient and versatile neural network based techniques have become recently. Here, the input is a bunch of RGB-D images, which are photographs endowed with depth information, and the output can be a full 3D reconstruction of a scene and much, much more, which we'll see in a moment. This task is typically taken care of by handcrafting descriptors. A descriptor is a specialized representation for doing useful tasks on images and other data structures. For instance, if we seek to build an algorithm to recognize black and white images, a useful descriptor would definitely contain the number of colors that are visible in an image and the list of these colors. Again, these descriptors have typically been handcrafted by scientists for decades: new problem, new descriptors, new papers. But not this time, because here, super effective descriptors are proposed automatically via a learning algorithm, a convolutional neural network, and Siamese networks. This is incredible. Creating such descriptors took extremely smart researchers and years of work on a specific problem, and the results were still often not as good as these. By the way, we have discussed Siamese networks in an earlier episode; as always, the link is available in the video description. And as you can imagine, several really cool applications emerge from this. One, when combined with RANSAC, a technique used to find order in noisy measurement data, it is able to perform 3D scene reconstructions from just a few images, and it completely smokes the competition. Two, pose estimation with bounding boxes. Given a sample of an object, the algorithm is able to recognize not only the shape itself, but also its orientation in a scene cluttered with other objects. Three, correspondence search is possible. This is really cool. This means that a semantically similar piece of geometry is recognized on different objects. For instance, the algorithm can learn the concept of a handle and recognize the handles on a variety of objects, such as on motorcycles, carriages, chairs, and more. The source code of this project is also available. Woohoo! Neural networks are rapidly establishing supremacy in a number of research fields, and I am so happy to be alive in this age of incredible research progress. Make sure to subscribe to the series and click the bell icon. Some amazing works are coming up in the next few episodes, and there will be lots of fun to be had. Thanks for watching, and for your generous support, and I'll see you next time.
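As an illustration of what such descriptors are used for downstream, here is a minimal sketch of brute-force descriptor matching between two scans. The descriptor dimensionality and the random vectors are invented; a real pipeline would use the learned descriptors from the paper and then prune these putative correspondences with a robust estimator such as RANSAC.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    # For each descriptor from scan A, find the closest descriptor from scan B
    # by Euclidean distance. These putative matches are later filtered robustly.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    return dists.argmin(axis=1)

desc_a = np.random.rand(100, 512)  # hypothetical learned descriptors, scan A
desc_b = np.random.rand(120, 512)  # hypothetical learned descriptors, scan B
matches = match_descriptors(desc_a, desc_b)
print(matches[:5])  # indices into desc_b for the first five keypoints of scan A
```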
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoizhou, Naifahir."}, {"start": 4.64, "end": 10.84, "text": " Today, we are going to discuss a great piece of work that shows us how efficient and versatile"}, {"start": 10.84, "end": 13.68, "text": " neural network based techniques have become recently."}, {"start": 13.68, "end": 20.32, "text": " Here, the input is a bunch of RGBD images, which are photographs endowed with depth information,"}, {"start": 20.32, "end": 25.38, "text": " and the output can be a full 3D reconstruction of a scene and much, much more, which we'll"}, {"start": 25.38, "end": 26.560000000000002, "text": " see in a moment."}, {"start": 26.56, "end": 30.56, "text": " This task is typically taken care of by handcrafting descriptors."}, {"start": 30.56, "end": 35.96, "text": " A descriptor is a specialized representation for doing useful tasks on images and other"}, {"start": 35.96, "end": 37.28, "text": " data structures."}, {"start": 37.28, "end": 42.4, "text": " For instance, if we seek to build an algorithm to recognize black and white images, a useful"}, {"start": 42.4, "end": 47.239999999999995, "text": " descriptor would definitely contain the number of colors that are visible in an image and"}, {"start": 47.239999999999995, "end": 48.76, "text": " the list of these colors."}, {"start": 48.76, "end": 53.96, "text": " Again, these descriptors have been typically handcrafted by scientists for decades, new"}, {"start": 53.96, "end": 56.96, "text": " problem, new descriptors, new papers."}, {"start": 56.96, "end": 62.64, "text": " But not this time, because here, super effective descriptors are proposed automatically via"}, {"start": 62.64, "end": 67.68, "text": " a learning algorithm, a convolutional neural network, and Xiaomi's networks."}, {"start": 67.68, "end": 69.48, "text": " This is incredible."}, {"start": 69.48, "end": 74.48, "text": " Creating such descriptors took extremely smart researchers and years of work on a specific"}, {"start": 74.48, "end": 78.24000000000001, "text": " problem and were still often not as good as these ones."}, {"start": 78.24000000000001, "end": 83.12, "text": " By the way, we have discussed Xiaomi's networks in an earlier episode, as always, the link"}, {"start": 83.12, "end": 85.44, "text": " is available in the video description."}, {"start": 85.44, "end": 89.84, "text": " And as you can imagine, several really cool applications emerge from this."}, {"start": 89.84, "end": 95.52000000000001, "text": " One, when combined with RANDSAC, a technique used to find order in noisy measurement data,"}, {"start": 95.52000000000001, "end": 100.92, "text": " it is able to perform 3D scene reconstructions from just a few images, and it completely"}, {"start": 100.92, "end": 104.12, "text": " smokes the competition."}, {"start": 104.12, "end": 110.2, "text": " Two, pause estimation with bounding boxes."}, {"start": 110.2, "end": 115.56, "text": " Even a sample of an object, the algorithm is able to recognize not only the shape itself,"}, {"start": 115.56, "end": 120.8, "text": " but also its orientation when given a scene cluttered with other objects."}, {"start": 120.8, "end": 124.32000000000001, "text": " Three, correspondence search is possible."}, {"start": 124.32000000000001, "end": 125.88, "text": " This is really cool."}, {"start": 125.88, "end": 131.56, "text": " This means that a semantically similar piece of geometry is recognized on different objects."}, {"start": 131.56, "end": 
136.68, "text": " For instance, the algorithm can learn the concept of a handle and recognize the handles on a"}, {"start": 136.68, "end": 142.68, "text": " variety of objects, such as on motorcycles, carriages, chairs, and more."}, {"start": 142.68, "end": 146.0, "text": " The source code of this project is also available."}, {"start": 146.0, "end": 147.0, "text": " Yuhu!"}, {"start": 147.0, "end": 151.88, "text": " Neural networks are rapidly establishing supremacy in a number of research fields, and I am so"}, {"start": 151.88, "end": 156.12, "text": " happy to be alive in this age of incredible research progress."}, {"start": 156.12, "end": 159.64000000000001, "text": " Make sure to subscribe to the series and click the bell icon."}, {"start": 159.64000000000001, "end": 164.04000000000002, "text": " Some amazing works are coming up in the next few episodes, and there will be lots of fun"}, {"start": 164.04000000000002, "end": 165.04000000000002, "text": " to be had."}, {"start": 165.04, "end": 169.16, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8YWgar0uCF8
Semantic Scene Completion From One Depth Image | Two Minute Papers #147
The paper "Semantic Scene Completion from a Single Depth Image" is available here: http://sscnet.cs.princeton.edu/ Recommended for you: How Does Deep Learning Work? - https://www.youtube.com/watch?v=He4t7Zekob0 Artificial Neural Networks and Deep Learning - https://www.youtube.com/watch?v=rCWTOOgVXyE WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Awesome Two Minute Papers merch: http://twominutepapers.com/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2225414/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work is an amazing application of deep neural networks that performs semantic scene completion from only one depth image. This depth image is the colorful image that you see here, where the colors denote how far away different objects are from our camera. We can create these images inexpensively with commodity hardware; for instance, the Microsoft Kinect has a depth sensor that is suitable for this task. The scene completion part means that from this highly incomplete depth information, the algorithm reconstructs the geometry for the entirety of the room, even parts that are completely missing from our images or are occluded. The output is what computer graphics researchers like to call a volumetric representation or a voxel array, which is essentially a large collection of tiny LEGO pieces that build up the scene. But this is not all, because the semantic part means that the algorithm actually understands what we are looking at and thus is able to classify different parts of the scene. These classes include walls, windows, floors, sofas, and other furniture. Previous works were able to do scene completion and geometry classification, but the coolest part of this algorithm is that it not only does these steps way better, but it does them both at the very same time. This work uses a 3D convolutional neural network to accomplish this task. The 3D part is required for this learning algorithm to be able to operate on this kind of volumetric data. As you can see, the results are excellent and are remarkably close to the ground truth data. If you remember, not so long ago, I flipped out when I saw the first neural network based techniques that understood 3D geometry from 2D images. That technique used a much more complicated architecture, a generative adversarial network, which also didn't do scene completion, and on top of that, the resolution of the output was way lower, which intuitively means that the LEGO pieces were much larger. This is insanity. The rate of progress in machine learning research is just stunning. Probably even for you, seasoned Fellow Scholars who watch Two Minute Papers and have high expectations. We've had plenty of previous episodes about the inner workings of different kinds of neural networks. I've put some links to them in the video description. Make sure to have a look if you wish to brush up on your machine learning concepts a bit. The authors also published a new data set to solve these kinds of problems in future research works. And it is also super useful because the output of their technique can be compared to ground truth data. When new solutions pop up in the future, this data set can be used as a yardstick to compare results with. The source code for this project is also available. Tinkerers rejoice. Thanks for watching and for your generous support, and I'll see you next time.
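To make the voxel-array idea tangible, here is a toy semantic voxel grid. The resolution and the label set are invented for illustration; the actual network predicts such a labeled volume from a single depth image rather than having it filled in by hand like this.

```python
import numpy as np

# A toy semantic voxel grid: every cell of the volume stores a class id.
LABELS = {0: "empty", 1: "wall", 2: "floor", 3: "window", 4: "sofa", 5: "other furniture"}

grid = np.zeros((64, 64, 32), dtype=np.uint8)  # the "LEGO pieces" of the scene (x, y, z)
grid[:, :, 0] = 2            # a floor slab at the bottom
grid[0, :, :] = 1            # one wall along an edge of the room
grid[0, 20:30, 8:16] = 3     # a window cut into that wall
grid[20:30, 20:28, 1:5] = 4  # a sofa standing on the floor

occupied = np.count_nonzero(grid)
print(f"{occupied} of {grid.size} voxels are occupied")
```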
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.32, "end": 9.76, "text": " This piece of work is an amazing application of deep neural networks that performs semantic"}, {"start": 9.76, "end": 13.44, "text": " scene completion from only one depth image."}, {"start": 13.44, "end": 17.88, "text": " This depth image is the colorful image that you see here, where the colors denote how"}, {"start": 17.88, "end": 21.36, "text": " far away different objects are from our camera."}, {"start": 21.36, "end": 26.560000000000002, "text": " We can create these images inexpensively with commodity hardware, for instance Microsoft"}, {"start": 26.56, "end": 30.279999999999998, "text": " Kinect has a depth sensor that is suitable for this task."}, {"start": 30.279999999999998, "end": 34.96, "text": " The scene completion part means that from this highly incomplete depth information, the"}, {"start": 34.96, "end": 39.32, "text": " algorithm reconstructs the geometry for the entirety of the room."}, {"start": 39.32, "end": 44.599999999999994, "text": " Even parts that are completely missing from our images are things that are occluded."}, {"start": 44.599999999999994, "end": 49.96, "text": " The output is what computer graphics researchers like to call a volumetric representation or"}, {"start": 49.96, "end": 55.28, "text": " a voxel array, which is essentially a large collection of tiny LEGO pieces that build"}, {"start": 55.28, "end": 56.28, "text": " up the scene."}, {"start": 56.28, "end": 61.36, "text": " But this is not all because the semantic part means that the algorithm actually understands"}, {"start": 61.36, "end": 66.2, "text": " what we are looking at and thus is able to classify different parts of the scene."}, {"start": 66.2, "end": 71.92, "text": " These classes include walls, windows, floors, sofas, and other furniture."}, {"start": 71.92, "end": 76.8, "text": " Previous works were able to do scene completion and geometry classification, but the coolest"}, {"start": 76.8, "end": 81.88, "text": " part of this algorithm is that it not only does these steps way better, but it does them"}, {"start": 81.88, "end": 84.36, "text": " both at the very same time."}, {"start": 84.36, "end": 88.92, "text": " This work uses a 3D convolutional neural network to accomplish this task."}, {"start": 88.92, "end": 94.16, "text": " The 3D part is required for this learning algorithm to be able to operate on this kind"}, {"start": 94.16, "end": 95.8, "text": " of volumetric data."}, {"start": 95.8, "end": 101.44, "text": " As you can see, the results are excellent and are remarkably close to the ground truth data."}, {"start": 101.44, "end": 106.4, "text": " If you remember, not so long ago, I flipped out when I've seen the first neural network"}, {"start": 106.4, "end": 111.03999999999999, "text": " based techniques that understood 3D geometry from 2D images."}, {"start": 111.04, "end": 116.68, "text": " That technique used a much more complicated architecture, a generative adversarial network,"}, {"start": 116.68, "end": 121.32000000000001, "text": " which also didn't do scene completion and on top of that, the resolution of the output"}, {"start": 121.32000000000001, "end": 127.12, "text": " was way lower, which intuitively means that the LEGO pieces were much larger."}, {"start": 127.12, "end": 128.72, "text": " This is insanity."}, {"start": 128.72, "end": 133.44, "text": " The rate of progress in machine learning 
research is just stunning."}, {"start": 133.44, "end": 139.48000000000002, "text": " Probably even for you, season fellow scholars who watch too many papers and have high expectations."}, {"start": 139.48, "end": 144.28, "text": " We've had plenty of previous episodes about the inner workings of different kinds of neural"}, {"start": 144.28, "end": 145.28, "text": " networks."}, {"start": 145.28, "end": 147.64, "text": " I've put some links to them in the video description."}, {"start": 147.64, "end": 152.2, "text": " Make sure to have a look if you wish to brush up on your machine learning concfurbit."}, {"start": 152.2, "end": 157.44, "text": " The authors also published a new data set to solve these kind of problems in future research"}, {"start": 157.44, "end": 158.44, "text": " works."}, {"start": 158.44, "end": 163.32, "text": " And it is also super useful because the output of their technique can be compared to ground"}, {"start": 163.32, "end": 164.6, "text": " truth data."}, {"start": 164.6, "end": 169.07999999999998, "text": " When new solutions pop up in the future, this data set can be used as a yardstick to"}, {"start": 169.08, "end": 170.64000000000001, "text": " compare results with."}, {"start": 170.64000000000001, "end": 173.64000000000001, "text": " The source code for this project is also available."}, {"start": 173.64000000000001, "end": 175.16000000000003, "text": " Tinkerers rejoice."}, {"start": 175.16, "end": 199.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=aAsejHZC5EE
Real-Time Modeling and Animation of Climbing Plants | Two Minute Papers #146
Two Minute Papers on Patreon + our technical memos: https://www.patreon.com/TwoMinutePapers https://www.patreon.com/TwoMinutePapers/posts?tag=what%27s%20new The paper "Interactive Modeling and Authoring of Climbing Plants" is available here: http://www.pirk.info/projects/climbing_plants/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Awesome Two Minute Papers merch: http://twominutepapers.com/ Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-413686/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about interactively modeling and editing climbing plants. This is one of my favorite kinds of works: mundane-sounding topic, immaculate execution. There are so many cool things about this paper I don't even know where to start. But first, let's talk about the modeling part. We can, for instance, plant a seed and not only have a look at how it grows as time goes by, but also influence the growth variability and shoot growth rates. Branches can be added and removed at will at any point in time. We can also add attractors, regions that are set to be more likely for the plant to grow towards. With these techniques, we can easily create any sort of artistic effect, be it highly artificial looking vines and branches, or some long forgotten object overgrown with climbing plants. However, a model is just 3D geometry. What truly makes these models come alive is animation, which is also executed with flying colors. The animations created with this technique are both biologically and physically plausible. So what do these terms mean exactly? Biologically plausible means that the plants grow according to the laws of nature, and physically plausible means that if we start tugging at a plant, branches start moving, bending and breaking according to the laws of physics. Due to its responsive and interactive nature, the applications of this technique are typically in the domain of architectural visualization, digital storytelling, or any sort of real-time application. And of course, the usual suspects, animated movies and game developers, can use this to create more immersive digital environments with ease. And don't forget about me, Károly, who would happily play with this basically all day long. If you are one of our many Fellow Scholars who are completely addicted to Two Minute Papers, make sure to check out our Patreon page where you can grab cool perks like watching these episodes in early access, or deciding the order of upcoming episodes, and more. Also, your support is extremely helpful, so much so that even the price of a cup of coffee per month helps us to create better videos for you. We write some reports from time to time to assess the improvements we were able to make with your support. The link is in the video description, or you can just click the letter P on the end screen in a moment. Thanks for watching, and for your generous support, and I'll see you next time.
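Here is a toy sketch of the attractor idea: one growth step that bends a shoot tip toward the nearest user-placed attractor while mostly keeping its upward tendency. The step size, pull weight, and attractor positions are all invented; the paper's actual growth model also covers branching, leaves, and physically based motion, none of which is attempted here.

```python
import numpy as np

def grow_shoot(tip, attractors, step=0.05, pull=0.6):
    # One growth step: blend the default upward growth direction with the
    # direction toward the nearest attractor, then advance the shoot tip.
    nearest = attractors[np.argmin(np.linalg.norm(attractors - tip, axis=1))]
    direction = (1.0 - pull) * np.array([0.0, 0.0, 1.0]) + pull * (nearest - tip)
    return tip + step * direction / np.linalg.norm(direction)

tip = np.array([0.0, 0.0, 0.0])           # where the seed was planted
attractors = np.array([[1.0, 0.0, 2.0],   # e.g. a trellis the artist placed
                       [-1.0, 1.0, 3.0]])
for _ in range(20):
    tip = grow_shoot(tip, attractors)
print(tip)
```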
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karojejona Ifeher."}, {"start": 4.8, "end": 9.8, "text": " This paper is about interactively modeling and editing, climbing plants."}, {"start": 9.8, "end": 15.84, "text": " This is one of my favorite kind of works, mundane sounding topic, immaculate execution."}, {"start": 15.84, "end": 20.04, "text": " There are so many cool things about this paper I don't even know where to start."}, {"start": 20.04, "end": 23.080000000000002, "text": " But first, let's talk about the modeling part."}, {"start": 23.080000000000002, "end": 28.32, "text": " We can, for instance, plant a seed and we can not only have a look at how it grows as"}, {"start": 28.32, "end": 34.64, "text": " time goes by, but we can also influence the grow variability and shoot growth rates."}, {"start": 34.64, "end": 38.24, "text": " Branches can be added and removed at will at any point in time."}, {"start": 38.24, "end": 43.28, "text": " We can also add attractors, regions that are set to be more likely for the plant to grow"}, {"start": 43.28, "end": 44.400000000000006, "text": " towards."}, {"start": 44.400000000000006, "end": 49.08, "text": " With these techniques, we can easily create any sort of artistic effect, be it highly"}, {"start": 49.08, "end": 54.88, "text": " artificial looking vines and branches, or some long forgotten object overgrown with climbing"}, {"start": 54.88, "end": 55.88, "text": " plants."}, {"start": 55.88, "end": 59.24, "text": " However, a model is just 3D geometry."}, {"start": 59.24, "end": 64.8, "text": " What truly makes these models come alive is animation, which is also executed with flying"}, {"start": 64.8, "end": 65.88, "text": " colors."}, {"start": 65.88, "end": 71.4, "text": " The animations created with this technique are both biologically and physically plausible."}, {"start": 71.4, "end": 74.12, "text": " So what do these terms mean exactly?"}, {"start": 74.12, "end": 78.76, "text": " Biologically plausible means that the plants grow according to the laws of nature, and"}, {"start": 78.76, "end": 84.04, "text": " physically plausible means that if we start tugging at it, branches start moving, bending"}, {"start": 84.04, "end": 87.12, "text": " and breaking according to the laws of physics."}, {"start": 87.12, "end": 91.80000000000001, "text": " Due to its responsive and interactive nature, the applications of this technique are typically"}, {"start": 91.80000000000001, "end": 97.64, "text": " in the domain of architectural visualization, digital storytelling, or any sort of real-time"}, {"start": 97.64, "end": 98.64, "text": " application."}, {"start": 98.64, "end": 103.56, "text": " And of course, the usual suspects, animated movies, and game developers can use this to"}, {"start": 103.56, "end": 106.84, "text": " create more immersive digital environments with ease."}, {"start": 106.84, "end": 112.24000000000001, "text": " And don't forget about me, Karoy, who would happily play with this basically all day long."}, {"start": 112.24, "end": 117.11999999999999, "text": " If you are one of our many fellow scholars who are completely addicted to two-minute papers,"}, {"start": 117.11999999999999, "end": 121.75999999999999, "text": " make sure to check out our Patreon page where you can grab cool perks like watching these"}, {"start": 121.75999999999999, "end": 126.8, "text": " episodes in early access, or deciding the order of upcoming episodes, and more."}, {"start": 126.8, "end": 
132.51999999999998, "text": " Also, your support is extremely helpful, so much so that even the price of a cup of coffee"}, {"start": 132.51999999999998, "end": 135.79999999999998, "text": " per month helps us to create better videos for you."}, {"start": 135.79999999999998, "end": 140.28, "text": " We write some reports from time to time to assess the improvements we were able to make"}, {"start": 140.28, "end": 141.48, "text": " with your support."}, {"start": 141.48, "end": 145.92, "text": " The link is in the video description, or you can just click the letter P on the end"}, {"start": 145.92, "end": 147.23999999999998, "text": " link screen in a moment."}, {"start": 147.24, "end": 174.68, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lxNEWuO6xQk
Controllable Fluid and Smoke Simulations | Two Minute Papers #145
The paper "Primal-Dual Optimization for Fluids" is available here: http://www.ntoken.com/pubs.html An introduction to fluid simulations and fluid control with source code, both CPU and GPU (OpenCL): 1. https://users.cg.tuwien.ac.at/zsolnai/gfx/fluid_control_msc_thesis/ 2. https://users.cg.tuwien.ac.at/zsolnai/gfx/real_time_fluid_control_eg/ Doyub Kim's book on fluid simulations, with source code: http://doyub.com/ https://twitter.com/doyub?lang=en The first Two Minute Papers episode on Wavelet Turbulence: https://www.youtube.com/watch?v=5xLSbj5SsSE WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1632785/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk some more about fluid simulations and fluid guiding. Hell yeah! As you know all too well, it is possible to simulate the laws of fluid motion on a computer, make a digital scene, and create absolutely beautiful videos, such as the ones you see here. Newer and newer research papers show up that extend the set of scenarios we can simulate, and there are also other works that speed up already existing solutions. This piece of work introduces a technique that mathematicians like to call the primal-dual optimization method. This helps us accomplish two really cool things. One is fluid guiding. Fluid guiding is a problem where we are looking to exert control over the fluid, while keeping the fluid flow as natural as possible. I've written my master's thesis on the very same topic and can confirm that it's one hell of a problem. The core of the problem is that if we use the laws of physics to create a fluid simulation, we get what would happen in reality as a result. However, if we wish to guide this piece of fluid towards a target shape, for instance to form an image of our choice, we have to both retain natural fluid flows, but still create something that would be highly unlikely to happen according to the laws of physics. For instance, a splash of water is unlikely to suddenly form a human face of our choice. The proposed technique addresses this ambivalent goal of exerting a bit of control over the fluid simulation, while keeping the flows as natural as possible. There are already many existing applications of fluids and smoke in movies, where an actor fires a gun and the fire and smoke plumes are added to the footage in post-production. However, with a high-quality fluid guiding technique, we could choose target shapes for these smoke plumes and explosions that best convey our artistic vision. And number two, it also accomplishes something that we call separating boundary conditions, which prevents imprecisions where small fluid volumes get stuck to walls. The guiding process is also followed by an upsampling step, where we take a coarse simulation and artificially synthesize sharp, high-frequency details onto it. Computing the more detailed simulation would often take days without such synthesizing techniques, kind of like with wavelet turbulence, which is an absolutely incredible paper that was showcased in none other than the very first Two Minute Papers episode. Link is in the video description box. Don't watch it. It's quite embarrassing. And all this leads to eye-poppingly beautiful solutions. Wow, I cannot get tired of this. In the paper, you will find much more about breaking dams, tornado simulations, and applications of the primal-dual optimization method. Normally, to remain as authentic to the source materials as possible, I don't do any kind of slow motion and other similar shenanigans, but this time I just couldn't resist it. Have a look at this, and I hope you'll like the results. If you feel the alluring call of fluids, I've put some resources in the video description, including a gentle introduction I wrote on the basics of fluid simulation and fluid control, with source code, both on the CPU and GPU, and a link to Doyub Kim's amazing book that I'm currently reading. Highly recommended. If you also have some online tutorials and papers that helped you solidify your understanding of the topic, make sure to leave a link in the comments.
I'll include the best ones in the video description. If you would like to see more episodes like this one, make sure to subscribe to two-minute papers. We would be more than happy to have you along on our journey of science. Thanks for watching, and for your generous support, and I'll see you next time.
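To make the guiding trade-off above concrete, here is a minimal Python sketch that blends the physically simulated velocities with a guiding field as a per-cell least-squares compromise. The paper solves a much richer problem with a primal-dual optimizer, so treat this only as an illustration of the trade-off; all names and parameters are assumptions.

import numpy as np

def guide_velocity(u_sim, u_guide, strength):
    """Per-cell minimizer of  ||u - u_sim||^2 + strength * ||u - u_guide||^2.

    u_sim    : velocities the unmodified physics produced, shape (H, W, 2)
    u_guide  : velocities that push the fluid toward the target shape
    strength : 0 keeps the natural flow; large values follow the target
    """
    return (u_sim + strength * u_guide) / (1.0 + strength)

H, W = 4, 4
u_sim = np.random.default_rng(1).normal(size=(H, W, 2))  # "natural" flow
u_guide = np.zeros((H, W, 2))
u_guide[..., 1] = 1.0                                     # push everything upward
print(guide_velocity(u_sim, u_guide, strength=0.5)[0, 0])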
[{"start": 0.0, "end": 4.42, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoizhou, naifahir."}, {"start": 4.42, "end": 9.38, "text": " Today, we are going to talk some more about fluid simulations and fluid guiding."}, {"start": 9.38, "end": 10.28, "text": " Hell yeah!"}, {"start": 10.28, "end": 15.88, "text": " As you know, all too well, it is possible to simulate the loss of fluid motion on a computer,"}, {"start": 15.88, "end": 21.740000000000002, "text": " make a digital scene, and create absolutely beautiful videos, such as the ones you see here."}, {"start": 21.740000000000002, "end": 28.18, "text": " Newer and newer research papers show up to both extend the possible scenarios that we can simulate,"}, {"start": 28.18, "end": 32.38, "text": " and there are also other works to speed up already existing solutions."}, {"start": 32.38, "end": 39.22, "text": " This piece of work introduces a technique that mathematicians like to call the primal dual optimization method."}, {"start": 39.22, "end": 42.519999999999996, "text": " This helps us accomplish two really cool things."}, {"start": 42.519999999999996, "end": 44.66, "text": " One is fluid guiding."}, {"start": 44.66, "end": 49.46, "text": " Fluid guiding is a problem where we are looking to exert control over the fluid,"}, {"start": 49.46, "end": 52.74, "text": " while keeping the fluid flow as natural as possible."}, {"start": 52.74, "end": 58.86, "text": " I've written my master thesis on the very same topic and can confirm that it's one hell of a problem."}, {"start": 58.86, "end": 63.540000000000006, "text": " The core of the problem is that if we use the loss of physics to create a fluid simulation,"}, {"start": 63.540000000000006, "end": 66.62, "text": " we get what would happen in reality as a result."}, {"start": 66.62, "end": 70.82000000000001, "text": " However, if we wish to guide this piece of fluid towards a target shape,"}, {"start": 70.82000000000001, "end": 76.54, "text": " for instance to form an image of our choice, we have to both retain natural fluid flows,"}, {"start": 76.54, "end": 82.74000000000001, "text": " but still creates something that would be highly unlikely to happen according to the loss of physics."}, {"start": 82.74000000000001, "end": 88.58000000000001, "text": " For instance, a splash of water is unlikely to suddenly form a human face of our choice."}, {"start": 88.58000000000001, "end": 94.46000000000001, "text": " The proposed technique helps this ambivalent goal of exerting a bit of control over the fluid simulation,"}, {"start": 94.46000000000001, "end": 97.58000000000001, "text": " while keeping the flows as natural as possible."}, {"start": 97.58000000000001, "end": 102.18, "text": " There are already many existing applications of fluids and smoke in movies,"}, {"start": 102.18, "end": 108.54, "text": " where an actor fires a gun and the fire and smoke plumes are added to the footage in post-production."}, {"start": 108.54, "end": 111.62, "text": " However, with a high-quality fluid guiding technique,"}, {"start": 111.62, "end": 118.26, "text": " we could choose target shapes for these smoke plumes and explosions that best convey our artistic vision."}, {"start": 118.26, "end": 124.26, "text": " And number two, it also accomplishes something that we call separating boundary conditions,"}, {"start": 124.26, "end": 136.74, "text": " which prevents imprecisions where small fluid volumes are being stuck to walls."}, {"start": 136.74, "end": 140.42000000000002, "text": " The 
guiding process is also followed by an upsand-link step,"}, {"start": 140.42000000000002, "end": 147.06, "text": " where we take a core simulation and artificially synthesize sharp, high-frequency details onto it."}, {"start": 147.06, "end": 152.54000000000002, "text": " Computing the more detailed simulation would often take days without such synthesizing techniques,"}, {"start": 152.54, "end": 158.06, "text": " kind of like with wavelet turbulence, which is an absolutely incredible paper that was showcased"}, {"start": 158.06, "end": 161.98, "text": " in none other than the very first two-minute paper's episode."}, {"start": 161.98, "end": 166.7, "text": " Link is in the video description box. Don't watch it. It's quite embarrassing."}, {"start": 166.7, "end": 173.29999999999998, "text": " And all this leads to eye-poppingly beautiful solutions. Wow, I cannot get tired of this."}, {"start": 173.29999999999998, "end": 178.29999999999998, "text": " In the paper, you will find much more about breaking dams, tornado simulations,"}, {"start": 178.29999999999998, "end": 181.73999999999998, "text": " and applications of the primal dual optimization method."}, {"start": 181.74, "end": 185.82000000000002, "text": " Normally, to remain as authentic to the source materials as possible,"}, {"start": 185.82000000000002, "end": 189.66, "text": " I don't do any kind of slow motion and other similar shenanigans,"}, {"start": 189.66, "end": 194.46, "text": " but this time I just couldn't resist it. Have a look at this, and I hope you'll like the results."}, {"start": 197.58, "end": 203.26000000000002, "text": " If you feel the alluring call of fluids, I've put some resources in the video description,"}, {"start": 203.26000000000002, "end": 208.38, "text": " including a gentle description I wrote on the basics of fluid simulation and fluid control,"}, {"start": 208.38, "end": 214.78, "text": " with source code, both on the CPU and GPU, and the link to Doyub Kim's amazing book that I'm"}, {"start": 214.78, "end": 219.82, "text": " currently reading. Highly recommended. If you also have some online tutorials and papers that"}, {"start": 219.82, "end": 224.46, "text": " help you solidify your understanding of the topic, make sure to leave a link in the comments."}, {"start": 224.46, "end": 229.01999999999998, "text": " I'll include the best ones in the video description. If you would like to see more episodes like this"}, {"start": 229.01999999999998, "end": 233.82, "text": " one, make sure to subscribe to two-minute papers. We would be more than happy to have you along"}, {"start": 233.82, "end": 239.9, "text": " on our journey of science. Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lf3ViWEeKqc
On-the-Fly 3D Printing While Modeling | Two Minute Papers #144
The paper "On-the-Fly Print: Incremental Printing While Modeling" is available here: http://www.huaishu.me/projects/on-the-fly.html http://www.cs.cornell.edu/projects/wireprint/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. You are going to love this killer paper. In the classical case of 3D fabrication, we first create the 3D geometry in our modeling software on a computer. Then, after we are done, we send this model to a 3D printer to create a physical copy of it. If we don't like an aspect of this printed model, we have to go back to the computer and adjust accordingly. If there are more fundamental issues, we may even have to start over. And get this, with this piece of work, we can have a low-fidelity wireframe version printed immediately as we make changes within the modeling software. This is a process we can refer to as real-time, or on-the-fly, 3D printing. In this work, both the hardware design and the algorithm that runs the printer are described. This approach has a number of benefits and, of course, a huge set of challenges. For instance, we can immediately see the result of our decisions and can test whether a new piece of equipment would correctly fit into the scene we are designing. Sometimes, depending on the geometry of the final object, different jobs need to be reordered to get a plan that is physically plausible to print. In this example, the bottom branch was designed by the artist first and the top branch afterwards. But their order has to be changed, otherwise the bottom branch would block the way to the top branch. The algorithm recognizes these cases and reorders the printing jobs correctly. Quite remarkable. And an alternative solution for rotating the object around for better reachability is also demonstrated. Because the algorithm is capable of this sort of decision-making, we don't even need to wait for the printer to finish a given step and can remain focused on the modeling process. Also, the handle of the teapot here collides with the body. Because of the limitations of wireframe modeling, such cases have to be detected and omitted. Connecting patches and adding differently sized holes to a model are also highly non-trivial problems that are all addressed in the paper. And this piece of work is also a testament to the potential of solutions where hardware and software are designed with each other in mind. I can only imagine how many work hours were put into this project until the final working solution was obtained. Incredible work indeed. We really just scratched the surface in this episode. Make sure to go to the video description and have a look at the paper for more details. It's definitely worth it. Also, if you enjoyed this Two Minute Papers episode, make sure to subscribe to the series, and if you are subscribed, click the bell icon to never miss an episode. Thanks for watching and for your generous support, and I'll see you next time.
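As a rough illustration of the job-reordering step described above, the sketch below treats "printing A first would block access to B" as a dependency and orders the jobs with a topological sort. The actual system performs geometric reachability checks that are not shown here; all identifiers are hypothetical.

from graphlib import TopologicalSorter  # Python 3.9+

def order_print_jobs(jobs, blocks):
    """jobs: list of job ids.
    blocks: set of (a, b) pairs meaning 'printing a first would block access
    to b', so b must be printed before a."""
    ts = TopologicalSorter()
    for job in jobs:
        ts.add(job)
    for a, b in blocks:
        ts.add(a, b)  # a depends on b, so b comes earlier in the order
    return list(ts.static_order())

# The bottom branch was modeled first, but it would block the way to the
# top branch, so the planner prints the top branch first.
print(order_print_jobs(["bottom_branch", "top_branch"],
                       {("bottom_branch", "top_branch")}))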
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two minute papers with Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 7.3, "text": " You are going to love this killer paper."}, {"start": 7.3, "end": 14.8, "text": " In the classical case of 3D fabrication, we first create the 3D geometry in our modeling software on a computer."}, {"start": 14.8, "end": 21.0, "text": " Then, after we are done, we send this model to a 3D printer to create a physical copy of it."}, {"start": 21.0, "end": 26.8, "text": " If we don't like an aspect of this printed model, we have to go back to the computer and adjust accordingly."}, {"start": 26.8, "end": 30.7, "text": " If there are more fundamental issues, we may even have to start over."}, {"start": 30.7, "end": 40.6, "text": " And get this, with this piece of work, we can have a low fidelity wireframe version printed immediately as we make changes within the modeling software."}, {"start": 40.6, "end": 45.7, "text": " This process we can refer to as real time or on the fly 3D printing."}, {"start": 45.7, "end": 51.400000000000006, "text": " In this work, both the hardware design and the algorithm that runs the printer is described."}, {"start": 51.400000000000006, "end": 55.900000000000006, "text": " This approach has a number of benefits and, of course, a huge set of challenges."}, {"start": 55.9, "end": 65.3, "text": " For instance, we can immediately see the result of our decisions and can test whether a new piece of equipment would correctly fit into the scene we are designing."}, {"start": 65.3, "end": 74.3, "text": " Sometimes, depending on the geometry of the final object, different jobs need to be reordered to get a plan that is physically plausible to print."}, {"start": 74.3, "end": 79.9, "text": " In this example, the bottom branch was designed by the artist first and the top branch afterwards."}, {"start": 79.9, "end": 86.0, "text": " But their order has to be changed, otherwise, the bottom branch would block the way to the top branch."}, {"start": 86.0, "end": 90.80000000000001, "text": " The algorithm recognizes these cases and reorders the printing jobs correctly."}, {"start": 90.80000000000001, "end": 92.30000000000001, "text": " Quite remarkable."}, {"start": 92.30000000000001, "end": 99.0, "text": " And, an alternative solution for rotating the object around for better reachability is also demonstrated."}, {"start": 99.0, "end": 103.7, "text": " Because of the fact that the algorithm is capable of this sort of decision-making,"}, {"start": 103.7, "end": 110.7, "text": " we don't even need to wait for the printer to finish a given step and can remain focused on the modeling process."}, {"start": 110.7, "end": 114.60000000000001, "text": " Also, the handle of the teapot here collides with the body."}, {"start": 114.60000000000001, "end": 119.80000000000001, "text": " Because of the limitations of wireframe modeling, such cases have to be detected and omitted."}, {"start": 122.80000000000001, "end": 131.4, "text": " Connecting patches and adding differently sized holes to a model are also highly non-trivial problems that are all addressed in the paper."}, {"start": 131.4, "end": 140.0, "text": " And this piece of work is also a testament to the potential of solutions where hardware and software is designed with each other in mind."}, {"start": 140.0, "end": 146.70000000000002, "text": " I can only imagine how many work hours were put in this project until the final working solution was obtained."}, {"start": 
146.70000000000002, "end": 148.5, "text": " Incredible work indeed."}, {"start": 148.5, "end": 151.3, "text": " We really just scratched the surface in this episode."}, {"start": 151.3, "end": 155.70000000000002, "text": " Make sure to go to the video description and have a look at the paper for more details."}, {"start": 155.70000000000002, "end": 157.4, "text": " It's definitely worth it."}, {"start": 157.4, "end": 163.6, "text": " Also, if you enjoyed this two-minute paper's episode, make sure to subscribe to the series and if you are subscribed,"}, {"start": 163.6, "end": 166.4, "text": " click the bell icon to never miss an episode."}, {"start": 166.4, "end": 188.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1SHW1-qKKpY
Real-Time Oil Painting on Mobile | Two Minute Papers #143
The paper "Real-Time Oil Painting on Mobile Hardware" is available here: http://graphics.cs.kuleuven.be/publications/SD2016RTOPOMH/index.html In addition: It is mentioned that mobile devices typically have a lower resolution display than desktop computers. While this is true, a more important limiting factor is screen real estate, and the fact that a resolution of the simulation is significantly lower on a phone given the vast differences in processing power. These are challenging limitations that are difficult to overcome. WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://pixabay.com/photo-1125445/ https://pixabay.com/photo-1138275/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. It's been quite a while since we've talked about a paper on fluid simulations. Since my withdrawal symptoms are already kicking in, today I simply have to talk to you about this amazing paint simulator program that runs in real time and on our mobile devices. These handheld devices typically have a lower image resolution compared to desktop computers, therefore it is indeed a challenge to put together a solution that artists can use to create detailed paintings with. And to accomplish this, this piece of work offers several killer features. For instance, the paint pigment concentration can be adjusted. The direction of the brushstrokes is also controllable. And third, this technique is powered by a viscoelastic shallow water simulator that also supports simulating multiple layers of paint. This is particularly challenging as the inner paint layers may have already dried when adding a new wet layer on top of them. This all has to be simulated in a way that is physically plausible. But we are not done yet. With the many different kinds of paint we use, the overall look of our paintings is dramatically different depending on the lighting conditions around them. To take this effect into consideration, this technique also has an intuitive feature where the effect of virtual light sources is simulated and the output is changed interactively as we tilt the tablet around. And get this, gravity is also simulated and the paint trickles down depending on the orientation of our tablet, according to the laws of physics. Really cool. The paper also shows visual comparisons against similar algorithms. And clearly, artists who work with these substances all day know exactly how they should behave in reality. So the ultimate challenge is always to give it to them and ask them whether they have enjoyed the workflow and found the simulation faithful to reality. Let the artists be the judge. The user study presented in the paper revealed that the artists loved the user experience and they expressed that it's second to none for testing ideas. I am sure that with a few improvements, this could be the ultimate tool for artists to unleash their creative potential while sitting outside and getting inspired by nature. Thanks for watching and for your generous support. I'll see you next time.
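Here is a deliberately tiny Python sketch of the tilt-driven trickling behavior mentioned above: a stronger tilt moves a larger fraction of each cell's paint one cell downhill. The paper uses a viscoelastic shallow-water simulator, so this is only a toy stand-in with assumed parameter names.

import numpy as np

def trickle(paint, tilt_deg, rate=0.2):
    """Move a fraction of each cell's paint one cell 'downhill'.

    paint    : 2D array of paint thickness
    tilt_deg : tablet tilt angle; 0 means it lies flat, so nothing flows
    """
    flow = rate * np.sin(np.radians(tilt_deg))  # stronger tilt, faster flow
    moved = flow * paint
    out = paint - moved
    out[1:, :] += moved[:-1, :]                 # row 0 is the top edge of the canvas
    return out

canvas = np.zeros((5, 3))
canvas[0, 1] = 1.0                              # one blob of paint at the top
for _ in range(3):
    canvas = trickle(canvas, tilt_deg=45)
print(canvas.round(2))                          # the blob smears downward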
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejol Naifahir."}, {"start": 4.32, "end": 8.72, "text": " It's been quite a while since we've talked about a paper on fluid simulations."}, {"start": 8.72, "end": 14.08, "text": " Since my withdrawal symptoms are already kicking in, today I simply have to talk to you"}, {"start": 14.08, "end": 20.8, "text": " about this amazing paint simulator program that runs in real-time and on our mobile devices."}, {"start": 20.8, "end": 26.400000000000002, "text": " These handheld devices typically have a lower image resolution compared to desktop computers,"}, {"start": 26.4, "end": 32.32, "text": " therefore it is indeed a challenge to put together a solution that artists can use to create detailed"}, {"start": 32.32, "end": 38.239999999999995, "text": " paintings with. And to accomplish this, this piece of work offers several killer features."}, {"start": 38.239999999999995, "end": 42.0, "text": " For instance, the paint pigment concentration can be adjusted."}, {"start": 42.0, "end": 45.2, "text": " The direction of the brushstrokes is also controllable."}, {"start": 45.2, "end": 51.36, "text": " And third, this technique is powered by a viscoelastic shallow water simulator that also supports"}, {"start": 51.36, "end": 57.04, "text": " simulating multiple layers of paint. This is particularly challenging as the inner paint layers"}, {"start": 57.04, "end": 62.72, "text": " may have already dried when adding a new wet layer on top of them. This all has to be simulated"}, {"start": 62.72, "end": 67.75999999999999, "text": " in a way that is physically plausible. But we are not done yet. With many different kinds of"}, {"start": 67.75999999999999, "end": 73.03999999999999, "text": " paint types that we are using, the overall outlook of our paintings are dramatically different"}, {"start": 73.03999999999999, "end": 77.52, "text": " depending on the lighting conditions around them. To take this effect into consideration,"}, {"start": 77.52, "end": 83.03999999999999, "text": " this technique also has an intuitive feature where the effect of virtual light sources is also"}, {"start": 83.03999999999999, "end": 89.19999999999999, "text": " simulated and the output is changed interactively as we tilt the tablet around. And get this,"}, {"start": 89.19999999999999, "end": 95.67999999999999, "text": " gravity is also simulated and the paint trickles down depending on the orientation of our tablet"}, {"start": 95.67999999999999, "end": 101.52, "text": " according to the laws of physics. Really cool. The paper also shows visual comparisons against"}, {"start": 101.52, "end": 108.16, "text": " similar algorithms. And clearly, artists who work with these substances all day know exactly how"}, {"start": 108.16, "end": 113.19999999999999, "text": " they should behave in reality. So the ultimate challenge is always to give it to them and ask"}, {"start": 113.19999999999999, "end": 118.32, "text": " them whether they have enjoyed the workflow and found the simulation faithful to reality."}, {"start": 118.32, "end": 123.92, "text": " Let the artist speed the judge. 
The user study presented in the paper revealed that the artists"}, {"start": 123.92, "end": 129.28, "text": " loved the user experience and they expressed that it's second to none for testing ideas."}, {"start": 129.28, "end": 134.56, "text": " I am sure that with a few improvements, this could be the ultimate tool for artists to unleash"}, {"start": 134.56, "end": 140.08, "text": " their creative potential while sitting outside and getting inspired by nature. Thanks for watching"}, {"start": 140.08, "end": 161.36, "text": " and for your generous support. I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UBORpapdAfU
Instant 3D Floorplans From Your Photos | Two Minute Papers #142
The paper "Rent3D: Floor-Plan Priors for Monocular Layout Estimation" is available here: http://www.cs.toronto.edu/~fidler/projects/rent3D.html http://www.cs.toronto.edu/~urtasun/publications/liu_etal_cvpr15.pdf Followup paper - HouseCraft: http://www.cs.toronto.edu/housecraft/ https://github.com/chuhang/HouseCraft WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-354233/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this piece of work, we are interested in creating a 3D virtual tour for an apartment. However, for this apartment, no 3D information is available. Instead, the input for the algorithm is something that we can obtain easily: in this case, a 2D floor plan and a set of images that we shot in the apartment. From this information, we wish to create a 3D floor plan that is not only faithful to the real one in terms of geometry, but where the photos with the correct viewpoints are also assigned to the correct walls. In order to accomplish this, one has to overcome a series of challenging problems. For instance, we have to estimate the layout of each room and find the location of the camera in each of these images. Also, to obtain high-quality solutions, the goal is to extract as much information from the inputs as possible. The authors recognized that the floor plans provide way more information than we take for granted. For instance, beyond showing the geometric relation of the rooms, they can also be used to find out the aspect ratios of the floor for each room. The window ratios can also be approximated and matched between the photos and the floor plan. This additional information is super useful when trying to find out which room is to be assigned to which part of the 3D floor plan. By just looking at the photos, we also have access to a large swath of learning algorithms that can reliably classify whether we are looking at a bathroom or a living room. There are even more constraints to adhere to in order to aggressively reduce the number of physical configurations. Make sure to have a look at the paper for details. There are lots of cool tricks described there. As always, there is a link to it in the video description. For instance, since the space of possible solutions is still too vast, a branch-and-bound type algorithm is proposed to further decimate the number of potential solutions to evaluate. And as you can see here, the comparisons against ground truth floor plans reveal that these solutions are indeed quite faithful to reality. The authors also kindly provided a dataset with more than 200 full apartments with well over a thousand photos and annotations for future use in follow-up research works. Creating such a dataset and publishing it is incredibly laborious and could easily be a paper on its own, and here we also get an excellent solution for this problem as well. In a separate work, the authors also published a different version of this problem formulation that reconstructs the exterior of buildings in a similar manner. There is so much to explore. The links are available in the video description. Make sure to have a look. In case you're wondering, it's still considered a crime not doing that. I hope you have enjoyed this episode, and I find it so delightful to see this unbelievably rapid growth on the channel. Earlier I thought that even twice as many would be amazing, but now we have exactly 8 times as many subscribers as one year ago. Words fail me to describe the joy of showing these amazing works to such a rapidly growing audience. This is why I always say at the end of every episode, thanks for watching and for your generous support, and I'll see you next time.
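To illustrate the branch-and-bound idea mentioned above, the sketch below assigns classified photos to floor-plan rooms and prunes any partial assignment whose cost already exceeds the best complete one found so far. The paper's actual formulation also reasons about layouts, camera poses and aspect ratios, so the cost matrix and all names here are purely illustrative.

def branch_and_bound(cost):
    """cost[i][j] = mismatch between photo i and floor-plan room j."""
    n = len(cost)
    best = {"rooms": None, "value": float("inf")}

    def recurse(i, used, partial):
        if partial >= best["value"]:      # bound: this branch cannot improve, prune it
            return
        if i == n:                        # all photos assigned
            best["rooms"], best["value"] = list(used), partial
            return
        for j in range(n):                # branch: try every still-free room
            if j not in used:
                recurse(i + 1, used + (j,), partial + cost[i][j])

    recurse(0, (), 0.0)
    return best

cost = [[0.1, 0.9, 0.8],                  # photo 0 looks like room 0 (say, a bathroom)
        [0.7, 0.2, 0.9],
        [0.8, 0.7, 0.1]]
print(branch_and_bound(cost))             # best assignment [0, 1, 2] with cost 0.4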
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolene Ifehir."}, {"start": 4.32, "end": 10.040000000000001, "text": " In this piece of work, we are interested in creating a 3D virtual tour for an apartment."}, {"start": 10.040000000000001, "end": 14.16, "text": " However, for this apartment, no 3D information is available."}, {"start": 14.16, "end": 19.12, "text": " Instead, the input for the algorithm is something that we can obtain easily."}, {"start": 19.12, "end": 24.32, "text": " In this case, a 2D floor plan and a set of images that we shot in the apartment."}, {"start": 24.32, "end": 29.12, "text": " From this information, we would create a 3D floor plan that is not only faithful to the"}, {"start": 29.12, "end": 35.120000000000005, "text": " real one in terms of geometry, but the photos with the correct viewpoints are also to be assigned"}, {"start": 35.120000000000005, "end": 36.72, "text": " to the correct walls."}, {"start": 36.72, "end": 41.68, "text": " In order to accomplish this, one has to overcome a series of challenging problems."}, {"start": 41.68, "end": 46.24, "text": " For instance, we have to estimate the layout of each room and find the location of the"}, {"start": 46.24, "end": 48.480000000000004, "text": " camera in each of these images."}, {"start": 48.480000000000004, "end": 53.68000000000001, "text": " Also, to obtain high quality solutions, the goal is to extract as much information from"}, {"start": 53.68000000000001, "end": 55.44, "text": " the inputs as possible."}, {"start": 55.44, "end": 60.0, "text": " The authors recognized that the floor plans provide way more information than we take"}, {"start": 60.0, "end": 61.16, "text": " for granted."}, {"start": 61.16, "end": 66.0, "text": " For instance, beyond showing the geometric relation of the rooms, it can also be used to"}, {"start": 66.0, "end": 70.16, "text": " find out the aspect ratios of the floor for each room."}, {"start": 70.16, "end": 75.52, "text": " The window tour ratios can also be approximated and matched between the photos and the floor"}, {"start": 75.52, "end": 76.52, "text": " plan."}, {"start": 76.52, "end": 80.96, "text": " This additional information is super useful when trying to find out which room is to be"}, {"start": 80.96, "end": 84.47999999999999, "text": " assigned to which part of the 3D floor plan."}, {"start": 84.48, "end": 89.60000000000001, "text": " On just looking at the photos, we also have access to a large swath of learning algorithms"}, {"start": 89.60000000000001, "end": 94.44, "text": " that can reliably classify whether we are looking at a bathroom or a living room."}, {"start": 94.44, "end": 99.04, "text": " There are even more constraints to adhere to in order to aggressively reduce the number"}, {"start": 99.04, "end": 100.92, "text": " of physical configurations."}, {"start": 100.92, "end": 103.24000000000001, "text": " Make sure to have a look at the paper for details."}, {"start": 103.24000000000001, "end": 105.52000000000001, "text": " There are lots of cool tricks described there."}, {"start": 105.52000000000001, "end": 108.2, "text": " As always, there is a link to it in the video description."}, {"start": 108.2, "end": 113.84, "text": " For instance, since the space of possible solutions is still too vast, a branch and bound"}, {"start": 113.84, "end": 119.28, "text": " type algorithm is proposed to further decimate the number of potential solutions to evaluate."}, {"start": 119.28, "end": 124.2, 
"text": " And as you can see here, the comparisons against ground truth floor plans reveal that these"}, {"start": 124.2, "end": 127.60000000000001, "text": " solutions are indeed quite faithful to reality."}, {"start": 127.60000000000001, "end": 133.72, "text": " The authors also kindly provided a dataset with more than 200 full apartments with well"}, {"start": 133.72, "end": 139.24, "text": " over a thousand photos and annotations for future use in follow-up research works."}, {"start": 139.24, "end": 143.4, "text": " Creating such a dataset and publishing it is incredibly laborious and could easily"}, {"start": 143.4, "end": 148.12, "text": " be a paper on its own, and here we also get an excellent solution for this problem as"}, {"start": 148.12, "end": 149.12, "text": " well."}, {"start": 149.12, "end": 153.72, "text": " In a separate work, the authors also published a different version of this problem formulation"}, {"start": 153.72, "end": 157.92000000000002, "text": " that reconstructs the exterior of buildings in a similar manner."}, {"start": 157.92000000000002, "end": 159.84, "text": " There is so much to explore."}, {"start": 159.84, "end": 162.08, "text": " The links are available in the video description."}, {"start": 162.08, "end": 163.08, "text": " Make sure to have a look."}, {"start": 163.08, "end": 166.72, "text": " In case you're wondering, it's still considered a crime not doing that."}, {"start": 166.72, "end": 171.92000000000002, "text": " I hope you have enjoyed this episode and I find it so delightful to see this unbelievably"}, {"start": 171.92, "end": 173.92, "text": " rapid growth on the channel."}, {"start": 173.92, "end": 179.72, "text": " Earlier I thought that even too would be amazing, but now we have exactly 8 times as many"}, {"start": 179.72, "end": 182.23999999999998, "text": " subscribers as one year ago."}, {"start": 182.23999999999998, "end": 187.64, "text": " Words fail me to describe the joy of showing these amazing works to such a rapidly growing"}, {"start": 187.64, "end": 188.64, "text": " audience."}, {"start": 188.64, "end": 193.0, "text": " This is why I always say at the end of every episode, thanks for watching and for your"}, {"start": 193.0, "end": 211.04, "text": " generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wz9cUncBdxw
Geometric Detail Transfer | Two Minute Papers #141
The paper "Learning Detail Transfer based on Geometric Features" is available here: http://www.chongyangma.com/publications/ld/index.html The story of our recent software and hardware overhaul: https://www.patreon.com/posts/software-and-8622149 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Christian Lawson, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1853203/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the world of digital 3D modeling, it often occurs that we are looking for surfaces that are not perfectly smooth but have some sort of surface detail. Wrinkles, engravings, and grain on a wooden table are excellent examples of details that we can add to our models, and computer graphics people like to collectively call these things displacement maps. Artists often encounter cases where they like the displacements on one object but the object itself is not really interesting. However, it could be that there is a different piece of geometry these details would look great on. Consider this problem solved, because in this piece of work, the input is two 3D models: one with interesting geometric details, and the other is the model onto which we transfer these surface details. The output will be a 3D geometric shape with these two models fused together. The results look absolutely amazing. I would love to use this right away in several projects. The first key part is the usage of metric learning. Wait, technical term. What does this mean exactly? Metric learning is a classical technique in the field of machine learning where we are trying to learn distances between things for which distance is mathematically ill-defined. Let's make it even simpler and go with an example. For instance, we have a database of human faces and we would like to search for faces that are similar to a given input. To do this, we specify a few distances by hand. For instance, we could say that a person with a beard is a short distance from one with a mustache, and a larger distance from one with no facial hair. If we hand many examples of these distances to a learning algorithm, it will be able to find people with similar beards. And in this work, this metric learning is used to learn the relationship between objects with and without these rich surface details. This helps in the transferring process. As to creating the new displacements on the new model, there are several hurdles to overcome. One, we cannot just grab the displacements and shove them onto a different model, because it can potentially look different and have different curvatures and sizes. The solution to this is capturing the statistical properties of the surface details and using this information to synthesize new ones on the target model. Note that we cannot just perform this texture synthesis in 2D like we do for images, because as we project the result onto a 3D model, it introduces severe distortions to the displacement patterns. It is a bit like putting a rubber blanket onto a complicated object: different regions of the blanket will be distorted differently. Make sure to have a look at the paper, where the authors present quite a few more results and, of course, the intricacies of this technique are also described in detail. I hope some public implementations of this method will appear soon. I would be quite excited to use this right away, and I am sure there are many artists who would love to create these wonderfully detailed models for the animated films and computer games of the future. In the meantime, we have a completely overhauled software and hardware pipeline to create these videos. We have written down our joyful and perilous story of it on Patreon. If you are interested in looking a bit behind the curtain as to how these episodes are made, make sure to have a look; it is available in the video description.
Thanks for watching and for your generous support and I'll see you next time.
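As a toy illustration of the metric-learning example above, the sketch below learns per-feature weights so that hand-labeled "similar" face pairs end up relatively closer than "dissimilar" ones. The features, labels and update rule are made up for illustration and are not the paper's procedure.

import numpy as np

def weighted_dist(w, a, b):
    # Distance with one learned, non-negative weight per feature.
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# Toy face descriptors: [has_facial_hair, facial_hair_style, hair_color]
faces = {
    "beard":    np.array([1.0, 0.2, 0.5]),
    "mustache": np.array([1.0, 0.8, 0.5]),
    "shaved":   np.array([0.0, 0.0, 0.5]),
}
similar    = [("beard", "mustache")]                        # hand-specified pairs
dissimilar = [("beard", "shaved"), ("mustache", "shaved")]

w, lr = np.ones(3), 0.1
for _ in range(20):
    for a, b in similar:        # pull similar pairs together
        w -= lr * (faces[a] - faces[b]) ** 2
    for a, b in dissimilar:     # push dissimilar pairs apart
        w += lr * (faces[a] - faces[b]) ** 2
    w = np.clip(w, 0.0, None)   # keep the metric valid (no negative weights)

print("beard vs mustache:", round(weighted_dist(w, faces["beard"], faces["mustache"]), 2))
print("beard vs shaved:  ", round(weighted_dist(w, faces["beard"], faces["shaved"]), 2))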
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejone Fehr."}, {"start": 4.5200000000000005, "end": 10.08, "text": " In the world of Digital 3D modeling, it often occurs that we are looking for surfaces that"}, {"start": 10.08, "end": 14.200000000000001, "text": " are not perfectly smooth but have some sort of surface detail."}, {"start": 14.200000000000001, "end": 20.080000000000002, "text": " Wrinkles and gravings, grain on a wooden table are excellent examples of details that we"}, {"start": 20.080000000000002, "end": 25.16, "text": " can add to our models and computer graphics people like to collectively call these things"}, {"start": 25.16, "end": 26.68, "text": " displacement maps."}, {"start": 26.68, "end": 31.52, "text": " Artists often encounter cases where they like the displacements on one object but the"}, {"start": 31.52, "end": 34.44, "text": " object itself is not really interesting."}, {"start": 34.44, "end": 38.96, "text": " However, it could be that there is a different piece of geometry these details would look"}, {"start": 38.96, "end": 40.36, "text": " great on."}, {"start": 40.36, "end": 45.879999999999995, "text": " Consider this problem solved because in this piece of work, the input is two 3D models."}, {"start": 45.879999999999995, "end": 51.480000000000004, "text": " One, with interesting geometric details and the other is the model onto which we transfer"}, {"start": 51.480000000000004, "end": 53.2, "text": " these surface details."}, {"start": 53.2, "end": 58.64, "text": " The output will be our 3D geometric shape with two of these models fused together."}, {"start": 58.64, "end": 61.120000000000005, "text": " The results look absolutely amazing."}, {"start": 61.120000000000005, "end": 64.84, "text": " I would love to use this right away in several projects."}, {"start": 64.84, "end": 68.52000000000001, "text": " The first key part is the usage of metric learning."}, {"start": 68.52000000000001, "end": 70.16, "text": " Wait, technical term."}, {"start": 70.16, "end": 72.0, "text": " What does this mean exactly?"}, {"start": 72.0, "end": 76.2, "text": " Metric learning is a classical technique in the field of machine learning where we are"}, {"start": 76.2, "end": 81.44, "text": " trying to learn distances between things where distance is mathematically ill-defined."}, {"start": 81.44, "end": 84.88, "text": " Let's make it even simpler and go with an example."}, {"start": 84.88, "end": 89.96, "text": " For instance, we have a database of human faces and we would like to search for faces that"}, {"start": 89.96, "end": 91.88, "text": " are similar to a given input."}, {"start": 91.88, "end": 95.08, "text": " To do this, we specify a few distances by hand."}, {"start": 95.08, "end": 99.6, "text": " For instance, we could say that a person with a beard is a short distance from one with"}, {"start": 99.6, "end": 103.92, "text": " a mustache and the larger distance from one with no facial hair."}, {"start": 103.92, "end": 108.52, "text": " If we hand many examples of these distances to a learning algorithm, it will be able to"}, {"start": 108.52, "end": 111.24, "text": " find people with similar beards."}, {"start": 111.24, "end": 116.08, "text": " And in this work, this metric learning is used to learn the relationship between objects"}, {"start": 116.08, "end": 119.56, "text": " with and without these rich surface details."}, {"start": 119.56, "end": 121.88, "text": " This helps in the transferring 
process."}, {"start": 121.88, "end": 127.52, "text": " As to creating the new displacements on the new model, there are several hurdles to overcome."}, {"start": 127.52, "end": 131.84, "text": " One, we cannot just grab the displacements and shove them onto a different model because"}, {"start": 131.84, "end": 136.2, "text": " it can potentially look different, have different curvatures and sizes."}, {"start": 136.2, "end": 141.16, "text": " The solution to this would be capturing the statistical properties of the surface details"}, {"start": 141.16, "end": 145.28, "text": " and use this information to synthesize new ones on the target model."}, {"start": 145.28, "end": 150.88, "text": " Note that we cannot just perform this texture synthesis in 2D like we do for images because"}, {"start": 150.88, "end": 156.79999999999998, "text": " as we project the result to a 3D model, it introduces severe distortions to the displacement"}, {"start": 156.79999999999998, "end": 157.79999999999998, "text": " patterns."}, {"start": 157.79999999999998, "end": 162.07999999999998, "text": " It is a bit like putting a rubber blanket onto a complicated object."}, {"start": 162.07999999999998, "end": 165.23999999999998, "text": " Different regions of the blanket will be distorted differently."}, {"start": 165.24, "end": 169.28, "text": " Make sure to have a look at the paper where the authors present quite a few more results"}, {"start": 169.28, "end": 173.8, "text": " and of course the intricacies of this technique are also described in detail."}, {"start": 173.8, "end": 177.72, "text": " I hope some public implementations of this method will appear soon."}, {"start": 177.72, "end": 182.8, "text": " I would be quite excited to use this right away and I am sure there are many artists who"}, {"start": 182.8, "end": 187.60000000000002, "text": " would love to create these wonderfully detailed models for the animated films and computer"}, {"start": 187.60000000000002, "end": 189.12, "text": " games of the future."}, {"start": 189.12, "end": 194.04000000000002, "text": " In the meantime, we have a completely overhauled software and hardware pipeline to create"}, {"start": 194.04, "end": 195.23999999999998, "text": " these videos."}, {"start": 195.23999999999998, "end": 199.88, "text": " We have written down our joyful and perilous story of it on Patreon."}, {"start": 199.88, "end": 203.76, "text": " If you are interested in looking a bit behind the curtain as to how these episodes are"}, {"start": 203.76, "end": 207.48, "text": " made, make sure to have a look it is available in the video description."}, {"start": 207.48, "end": 227.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UEPbzj-ekAI
Modeling Knitted Clothing | Two Minute Papers #140
The paper "Stitch Meshes for Modeling Knitted Clothing with Yarn-level Detail" is available here: http://www.cs.cornell.edu/projects/stitchmeshes/ http://www.cemyuksel.com/research/stitchmeshes/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-2042186/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Not so long ago, we talked about a technique that enabled us to render stunningly high-quality cloth models in real time. It supported level of detail, self-shadowing, and lots of other goodies that make computer game developers, and of course, my humble self, super happy. And today, we are going to talk about a technique that is able to create these highly detailed cloth geometries for our digital characters. I have really fond memories of attending the talk of Academy Award winner Steve Marschner on this paper a few years ago in Switzerland, and I remember being so spellbound by it that I knew this was a day I would never forget. I am sure you'll love it too. In this piece of work, the goal is to create a digital garment model that is as detailed and realistic as possible. We start out with an input 3D geometry that shows the rough shape of the model, then we pick a knitting pattern of our choice. After that, the points of this knitting pattern are moved so that they correctly fit this 3D geometry that we specified. And now comes the coolest part. What we created so far is an ad hoc model that doesn't really look and behave like a real piece of cloth. To remedy this, a physics-based simulation is run that takes this ad hoc model, and the output of this process will be a realistic rest shape for these yarn curves. And here you can witness how the simulated forces pull the entire piece of garment together. We start out by dreaming up a piece of cloth geometry, and this simulator gradually transforms it into a real-world version of that. This is a step that we call yarn-level relaxation. Wow. These final results look not only magnificent, but in a physical simulation they also behave like real garments. It's such a joy to look at results like this, loving it. Again, I would like to note that we are not talking about the visualization of the garment, but about creating a realistic piece of geometry. The most obvious drawback of this technique is its computation time. It was run on a very expensive system and still took several hours of number crunching to get this done. However, I haven't seen an implementation of this on the graphics card yet, so if someone can come up with an efficient way to do it, in an ideal case we may be able to do this in several minutes. I also have to notify you about the fact that it is considered a crime not to have a look at the paper in the video description. It does not suffice to say that it is well written. It is so brilliantly presented, it's truly a one-of-a-kind work that everyone has to see. If you enjoyed this episode, make sure to subscribe to Two Minute Papers; we would be happy to have you in our growing club of Fellow Scholars. Thanks for watching and for your generous support, and I'll see you next time.
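Here is a heavily simplified Python sketch of the yarn-level relaxation step described above: a polyline of yarn control points is iteratively relaxed toward its rest length with position-based stretch corrections. The real system also handles yarn-yarn contact, bending and friction, so everything below is an assumption-laden toy.

import numpy as np

def relax_yarn(points, rest_len, iters=200, stiffness=0.5):
    """Relax a yarn polyline until its segments settle near rest_len."""
    p = np.asarray(points, dtype=float)
    for _ in range(iters):
        for i in range(len(p) - 1):
            seg = p[i + 1] - p[i]
            length = np.linalg.norm(seg) + 1e-12
            # Move both endpoints to remove part of the stretch error each sweep.
            correction = stiffness * 0.5 * (length - rest_len) * seg / length
            p[i] += correction
            p[i + 1] -= correction
    return p

# A yarn "dreamed up" longer than its rest length contracts when relaxed.
yarn = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]
relaxed = relax_yarn(yarn, rest_len=0.5)
print(np.linalg.norm(np.diff(relaxed, axis=0), axis=1).round(3))  # segment lengths near 0.5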
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.48, "end": 9.92, "text": " Not so long ago, we talked about a technique that enabled us to render stunningly high-quality"}, {"start": 9.92, "end": 12.24, "text": " cloth models in real time."}, {"start": 12.24, "end": 17.1, "text": " It supported level of detail, self-shadowing, and lots of other goodies that make computer"}, {"start": 17.1, "end": 21.32, "text": " game developers, and of course, my humble self, super happy."}, {"start": 21.32, "end": 26.060000000000002, "text": " And today, we are going to talk about a technique that is able to create these highly detailed"}, {"start": 26.060000000000002, "end": 29.04, "text": " cloth geometries for our digital characters."}, {"start": 29.04, "end": 34.0, "text": " I have really fond memories of attending to the talk of the Oscar Award winner Steve"}, {"start": 34.0, "end": 39.44, "text": " Martiner on this paper a few years ago in Switzerland, and I remember being so spellbound"}, {"start": 39.44, "end": 43.0, "text": " by it that I knew this was a day I will never forget."}, {"start": 43.0, "end": 44.96, "text": " I am sure you'll love it too."}, {"start": 44.96, "end": 50.44, "text": " In this piece of work, the goal is to create a digital garment model that is as detailed"}, {"start": 50.44, "end": 52.480000000000004, "text": " and realistic as possible."}, {"start": 52.480000000000004, "end": 58.480000000000004, "text": " We start out with an input 3D geometry that shows the rough shape of the model, then we pick"}, {"start": 58.48, "end": 60.879999999999995, "text": " a knitting pattern of our choice."}, {"start": 60.879999999999995, "end": 65.96, "text": " After that, the points of this knitting pattern are moved so that they correctly fit this"}, {"start": 65.96, "end": 68.72, "text": " 3D geometry that we specified."}, {"start": 68.72, "end": 70.84, "text": " And now comes the coolest part."}, {"start": 70.84, "end": 76.52, "text": " What we created so far is an ad hoc model that doesn't really look and behave like a real"}, {"start": 76.52, "end": 77.92, "text": " piece of cloth."}, {"start": 77.92, "end": 83.4, "text": " To remedy this, a physics-based simulation is run that takes this ad hoc model and the"}, {"start": 83.4, "end": 88.52000000000001, "text": " output of this process will be a realistic rest shape for these yarn curves."}, {"start": 88.52000000000001, "end": 94.48, "text": " And here you can witness how the simulated forces pull the entire piece of garment together."}, {"start": 94.48, "end": 100.2, "text": " We start out with dreaming up a piece of cloth geometry and this simulator gradually transforms"}, {"start": 100.2, "end": 102.76, "text": " it into a real world version of that."}, {"start": 102.76, "end": 106.4, "text": " This is a step that we call yarn level relaxation."}, {"start": 106.4, "end": 108.04, "text": " Wow."}, {"start": 108.04, "end": 113.64, "text": " These final results look not only magnificent, but in a physical simulation they also behave"}, {"start": 113.64, "end": 115.24000000000001, "text": " like real garments."}, {"start": 115.24000000000001, "end": 118.60000000000001, "text": " It's such a joy to look at results like this, loving it."}, {"start": 118.60000000000001, "end": 123.12, "text": " Again, I would like to note that we are not talking about the visualization of the garment"}, {"start": 123.12, "end": 126.52000000000001, "text": " but creating a 
realistic piece of geometry."}, {"start": 126.52000000000001, "end": 130.36, "text": " The most obvious drawback of this technique is its computation time."}, {"start": 130.36, "end": 135.32, "text": " It was run on a very expensive system and still took several hours of number crunching"}, {"start": 135.32, "end": 136.56, "text": " to get this done."}, {"start": 136.56, "end": 141.24, "text": " However, I haven't seen an implementation of this on the graphics card yet, so if someone"}, {"start": 141.24, "end": 146.16, "text": " can come up with an efficient way to do it, in an ideal case we may be able to do this"}, {"start": 146.16, "end": 147.48, "text": " in several minutes."}, {"start": 147.48, "end": 152.48, "text": " I also have to notify you about the fact that it is considered a crime not having a look"}, {"start": 152.48, "end": 154.52, "text": " at the paper in the video description."}, {"start": 154.52, "end": 157.32, "text": " It does not suffice to say that it is well written."}, {"start": 157.32, "end": 162.8, "text": " It is so brilliantly presented, it's truly a one-of-a-kind work that everyone has to"}, {"start": 162.8, "end": 163.8, "text": " see."}, {"start": 163.8, "end": 168.60000000000002, "text": " If you enjoyed this episode, make sure to subscribe to Two Minute Papers with Be Happy to have"}, {"start": 168.60000000000002, "end": 171.12, "text": " you in our Growing Club of Fellow Scholars."}, {"start": 171.12, "end": 193.68, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=n3aoc36V8LM
Structural Image Editing With PatchMatch | Two Minute Papers #139
The paper "PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing" and part of its source code is available here: http://gfx.cs.princeton.edu/gfx/pubs/Barnes_2009_PAR/index.php Additional, unofficial implementations: https://github.com/ikuwow/PatchMatch https://github.com/rcrandall/PatchMatch WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image source: https://pixabay.com/photo-1594689/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We are currently more than 130 episodes into the series and we still haven't talked about this algorithm. How could we go on for so long without PatchMatch? So let's do this right now. You'll love this one. This technique helps us make absolutely crazy modifications to previously existing photographs, and it is one of the landmark papers for all kinds of photo manipulation, one that is still widely used to this day. Consider the following workflow. We have this image as an input. Let's mark the roofline for hole filling, or image inpainting as the literature refers to it, and the hole is now filled with quite sensible information. Now we mark some of the pillars to reshape the object, and then we pull the roof upward. The output is a completely redesigned version of the input photograph. Wow! Absolutely incredible. And the whole thing happens interactively, almost in real time. But if we consider the hardware improvements since this paper was published, it is safe to say that today it runs in real time even on a mediocre computer. And in this piece of work, the image completion part works by adding additional hints to the algorithm, for instance, marking the expected shape of an object that we wish to cut out and have filled with new data. Moreover, showing the shape of the building that we wish to edit also helps the technique considerably. These results are so stunning, I remember when I first saw them, I had to recheck over and over again because I could hardly believe my eyes. This technique offers not only these high-quality results, but it is also considerably quicker than its competitors. To accomplish image inpainting, most algorithms look for regions in the image that are similar to the one that is being removed and borrow some information from there for the filling process. Here, one of the key ideas that speeds up the process is that when good correspondences are found and we are doing another lookup, we shouldn't restart the patch matching process, but should try to search nearby, because that's where we are most likely to find useful information. You know what the best part is? What you see here is just the start. This technique does not use any of the modern machine learning techniques, so in the era of these incredibly powerful deep neural networks, I can only imagine the quality of solutions we will be able to obtain in the near future. We are living amazing times indeed. Thanks for watching and for your generous support. Bye.
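The propagation idea mentioned above, reusing a good correspondence found at a neighboring pixel instead of restarting the search, is the heart of PatchMatch. Below is a minimal, unofficial Python sketch of the randomized nearest-neighbour field (NNF) computation for float grayscale images; the patch size, iteration count, and the simplifications (no hole filling, no multi-scale pyramid) are assumptions for illustration only.

```python
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, p):
    # sum of squared differences between the p x p patch at (ax, ay) in A
    # and the p x p patch at (bx, by) in B
    return np.sum((A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]) ** 2)

def patchmatch(A, B, p=7, iters=4, seed=0):
    rng = np.random.default_rng(seed)
    h, w = A.shape[0] - p, A.shape[1] - p      # patch positions in A
    hb, wb = B.shape[0] - p, B.shape[1] - p    # patch positions in B
    # 1) random initialization of the nearest-neighbour field
    nnf = np.stack([rng.integers(0, wb, (h, w)),
                    rng.integers(0, hb, (h, w))], axis=-1)
    cost = np.array([[patch_dist(A, B, x, y, nnf[y, x, 0], nnf[y, x, 1], p)
                      for x in range(w)] for y in range(h)], dtype=float)
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1        # alternate scan direction
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # 2) propagation: reuse the (shifted) match of already visited neighbours
                for nx, ny in ((x - step, y), (x, y - step)):
                    if 0 <= nx < w and 0 <= ny < h:
                        bx = int(np.clip(nnf[ny, nx, 0] + (x - nx), 0, wb - 1))
                        by = int(np.clip(nnf[ny, nx, 1] + (y - ny), 0, hb - 1))
                        d = patch_dist(A, B, x, y, bx, by, p)
                        if d < cost[y, x]:
                            cost[y, x], nnf[y, x] = d, (bx, by)
                # 3) random search around the current best match at shrinking radii
                r = max(wb, hb)
                while r >= 1:
                    bx = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, wb - 1))
                    by = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, hb - 1))
                    d = patch_dist(A, B, x, y, bx, by, p)
                    if d < cost[y, x]:
                        cost[y, x], nnf[y, x] = d, (bx, by)
                    r //= 2
    return nnf
```

A full editing tool would then vote the matched patches back into the hole and iterate, but even this stripped-down NNF loop shows why the method is fast: most of the work is propagation, which only checks two candidates per pixel.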
[{"start": 0.0, "end": 4.92, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Ejo and Afehir."}, {"start": 4.92, "end": 11.08, "text": " We are currently more than 130 episodes into the series and we still haven't talked about"}, {"start": 11.08, "end": 12.280000000000001, "text": " this algorithm."}, {"start": 12.280000000000001, "end": 15.32, "text": " How could we go on for so long without patch match?"}, {"start": 15.32, "end": 17.12, "text": " So let's do this right now."}, {"start": 17.12, "end": 18.32, "text": " You'll love this one."}, {"start": 18.32, "end": 23.48, "text": " This technique helps us to make absolutely crazy modifications to previously existing"}, {"start": 23.48, "end": 28.6, "text": " photographs and it is one of the landmark papers for all kinds of photo manipulation which"}, {"start": 28.6, "end": 31.080000000000002, "text": " is still widely used to this day."}, {"start": 31.080000000000002, "end": 32.52, "text": " Consider the following workflow."}, {"start": 32.52, "end": 34.72, "text": " We have this image as an input."}, {"start": 34.72, "end": 39.68, "text": " Let's mark the roof line for hole filling or image in painting as the literature refers"}, {"start": 39.68, "end": 44.44, "text": " to it and the hole is now filled with quite sensible information."}, {"start": 44.44, "end": 51.120000000000005, "text": " Now we mark some of the pillars to reshape the object and then we pull the roof upward."}, {"start": 51.120000000000005, "end": 55.400000000000006, "text": " The output is a completely redesigned version of the input photograph."}, {"start": 55.400000000000006, "end": 56.400000000000006, "text": " Wow!"}, {"start": 56.400000000000006, "end": 58.36, "text": " Absolutely incredible."}, {"start": 58.36, "end": 62.24, "text": " And the whole thing happens interactively, almost in real time."}, {"start": 62.24, "end": 66.4, "text": " But if we consider the hardware improvement since this paper was published, it is safe"}, {"start": 66.4, "end": 71.24, "text": " to say that today it runs in real time even on a mediocre computer."}, {"start": 71.24, "end": 76.4, "text": " And in this piece of work, the image completion part works by adding additional hints to"}, {"start": 76.4, "end": 77.6, "text": " the algorithm."}, {"start": 77.6, "end": 82.32, "text": " For instance, marking the expected shape of an object that we wish to cut out and have"}, {"start": 82.32, "end": 83.92, "text": " it filled with new data."}, {"start": 83.92, "end": 88.64, "text": " Moreover, showing the shape of the building that we wish to edit also helps the technique"}, {"start": 88.64, "end": 89.64, "text": " considerably."}, {"start": 89.64, "end": 91.48, "text": " These results are so stunning."}, {"start": 91.48, "end": 93.72, "text": " I remember when I had first seen them."}, {"start": 93.72, "end": 98.56, "text": " I had to recheck over and over again because I could hardly believe my eyes."}, {"start": 98.56, "end": 103.72, "text": " This technique offers not only these high quality results but it is considerably quicker"}, {"start": 103.72, "end": 105.28, "text": " than its competitors."}, {"start": 105.28, "end": 110.6, "text": " To accomplish image in painting, most algorithms look for regions in the image that are similar"}, {"start": 110.6, "end": 115.22, "text": " to the one that is being removed and borrow some information from there for the filling"}, {"start": 115.22, "end": 116.22, "text": " process."}, {"start": 116.22, "end": 121.03999999999999, 
"text": " Here, one of the key ideas that speed up the process is that when good correspondences"}, {"start": 121.03999999999999, "end": 126.56, "text": " are found, if we are doing another look-up, we shouldn't restart the patch matching process,"}, {"start": 126.56, "end": 131.12, "text": " but we should try to search nearby because that's where we are most likely to find useful"}, {"start": 131.12, "end": 132.12, "text": " information."}, {"start": 132.12, "end": 133.88, "text": " You know what the best part is?"}, {"start": 133.88, "end": 136.35999999999999, "text": " What you see here is just the start."}, {"start": 136.36, "end": 141.16000000000003, "text": " This technique does not use any of the modern machine learning techniques, so in the era"}, {"start": 141.16000000000003, "end": 146.84, "text": " of these incredibly powerful deep neural networks, I can only imagine the quality of solutions"}, {"start": 146.84, "end": 149.36, "text": " will be able to obtain in the near future."}, {"start": 149.36, "end": 151.72000000000003, "text": " We are living amazing times indeed."}, {"start": 151.72, "end": 168.8, "text": " Thanks for watching and for your generous support."}, {"start": 168.8, "end": 190.4, "text": " Bye."}]
Two Minute Papers
https://www.youtube.com/watch?v=bB54Wz4kq0E
Shape2vec: Understanding 3D Shapes With AI | Two Minute Papers #138
The paper "Shape2Vec: semantic-based descriptors for 3D shapes, sketches and images" is available here: http://www.cl.cam.ac.uk/research/rainbow/projects/shape2vec/ Code (coming soon according to the authors): https://github.com/ftasse/Shape2Vec WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1828007/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This one is going to be absolutely amazing. This piece of work aims to help a machine build a better understanding of images and 3D geometry. Imagine that we have a large database with these geometries and images, and we can search and compare them with arbitrary inputs and outputs. What does this mean exactly? For instance, it can handle a text input such as school bus and automatically retrieve 3D models, sketches and images that depict these kinds of objects. This is great, but we said that it supports arbitrary inputs and outputs, which means that we can use the 3D geometry of a chair as an input and obtain other similar-looking chairs from the database. This technique is so crazy it can even take a sketch as an input and provide excellent quality outputs. We can even give it a heat map of the input and expect quite reasonable results. Typically, these images and 3D geometries contain a lot of information, and to be able to compare which is similar to which, we have to compress this information into a more concise description. This description offers a common ground for comparisons. We like to call these embedding techniques. Here, you can see an example of a 2D visualization of such an embedding of word classes. The retrieval from the database happens by compressing the user-provided input, putting it into this space, and fetching the results that are the closest to it in this embedding. Before the emergence of powerful learning algorithms, these embeddings were typically done by hand. But now, we have these deep neural networks that are able to automatically create solutions for us that are in some sense optimal, meaning that according to a set of rules, they will always do better than we would by hand. We get better results by going to sleep and leaving the computer on overnight than we would have gotten by working all night using the finest algorithms from 10 years ago. Isn't this incredible? The interesting thing is that here, we are able to do this for several different representations. For instance, a piece of 3D geometry or a 2D color image is embedded into the very same vector space, opening up the possibility of doing these amazing comparisons between completely different representations. The results speak for themselves. This is another great testament to the power of convolutional neural networks, and as you can see, the rate of progress in AI and machine learning research is absolutely stunning. Also, big thumbs up for the observant Fellow Scholars out there who noticed the new outro music and some other minor changes in the series. If you are among those people, you can consider yourself a hardcore Two Minute Papers scholar. High five! Thanks for watching and for your generous support. I'll see you next time.
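The retrieval mechanism described above, embedding every modality into one vector space and fetching nearest neighbours, can be sketched in a few lines. In the hypothetical Python example below, encode_sketch and encode_shape are untrained random linear maps standing in for the paper's learned encoders; only the cosine-similarity lookup reflects how retrieval in a shared embedding space actually works.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # embedding dimension (assumed)

# Hypothetical stand-ins for learned per-modality encoders
W_sketch = rng.normal(size=(D, 64 * 64))   # maps a flattened 64x64 sketch to the space
W_shape = rng.normal(size=(D, 1024))       # maps a 1024-d geometric descriptor to the space

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-9)

def encode_sketch(img):
    return normalize(W_sketch @ img.ravel())

def encode_shape(desc):
    return normalize(W_shape @ desc)

# a toy database of 3D-shape embeddings
db = np.stack([encode_shape(rng.normal(size=1024)) for _ in range(500)])

def retrieve(query_vec, k=5):
    # cosine similarity is just a dot product after normalization
    scores = db @ query_vec
    return np.argsort(-scores)[:k]

query = encode_sketch(rng.normal(size=(64, 64)))  # a "sketch" query
print(retrieve(query))  # indices of the 5 most similar shapes in the database
```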
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two minute papers with Karo Ejonei Fehere."}, {"start": 4.46, "end": 7.72, "text": " This one is going to be absolutely amazing."}, {"start": 7.72, "end": 14.92, "text": " This piece of work is aimed to help a machine build a better understanding of images and 3D geometry."}, {"start": 14.92, "end": 18.86, "text": " Imagine that we have a large database with these geometries and images,"}, {"start": 18.86, "end": 23.76, "text": " and we can search and compare them with arbitrary inputs and outputs."}, {"start": 23.76, "end": 25.46, "text": " What does this mean exactly?"}, {"start": 25.46, "end": 32.72, "text": " For instance, it can handle a text input such as school bus and automatically retrieve 3D models,"}, {"start": 32.72, "end": 36.86, "text": " sketches and images that depict these kinds of objects."}, {"start": 36.86, "end": 41.42, "text": " This is great, but we said that it supports arbitrary inputs and outputs,"}, {"start": 41.42, "end": 50.02, "text": " which means that we can use the 3D geometry of a chair as an input and obtain other similar looking chairs from the database."}, {"start": 50.02, "end": 57.02, "text": " This technique is so crazy it can even take a sketch as an input and provide excellent quality outputs."}, {"start": 57.02, "end": 61.900000000000006, "text": " We can even give it a heat map of the input and expect quite reasonable results."}, {"start": 61.900000000000006, "end": 66.7, "text": " Typically, these images and 3D geometries contain a lot of information,"}, {"start": 66.7, "end": 73.7, "text": " and to be able to compare which is similar to which we have to compress this information into a more concise description."}, {"start": 73.7, "end": 77.46000000000001, "text": " This description offers a common ground for comparisons."}, {"start": 77.46, "end": 80.25999999999999, "text": " We like to call these embedding techniques."}, {"start": 80.25999999999999, "end": 86.33999999999999, "text": " Here, you can see an example of a 2D visualization of such an embedding of word classes."}, {"start": 86.33999999999999, "end": 91.02, "text": " The retrieval from the database happens by compressing the user provided input"}, {"start": 91.02, "end": 96.78, "text": " and putting it into this space and fetching the results that are the closest to it in this embedding."}, {"start": 96.78, "end": 102.53999999999999, "text": " Before the emergence of powerful learning algorithms, these embeddings were typically done by hand."}, {"start": 102.54, "end": 108.18, "text": " But now, we have these deep neural networks that are able to automatically create solutions for us"}, {"start": 108.18, "end": 112.86000000000001, "text": " that are in some sense optimal, meaning that according to a set of rules,"}, {"start": 112.86000000000001, "end": 115.58000000000001, "text": " it will always do better than we would by hand."}, {"start": 115.58000000000001, "end": 120.62, "text": " We get better results by going to sleep and leaving the computer on overnight,"}, {"start": 120.62, "end": 126.02000000000001, "text": " then we would have working all night using the finest algorithms from 10 years ago."}, {"start": 126.02000000000001, "end": 127.5, "text": " Isn't this incredible?"}, {"start": 127.5, "end": 133.18, "text": " The interesting thing is that here, we are able to do this for several different representations."}, {"start": 133.18, "end": 140.46, "text": " For instance, a piece of 3D geometry or 2D color 
image is being embedded into the very same vector space"}, {"start": 140.46, "end": 146.78, "text": " opening up the possibility of doing these amazing comparisons between completely different representations."}, {"start": 146.78, "end": 149.1, "text": " The results speak for themselves."}, {"start": 149.1, "end": 153.42000000000002, "text": " This is another great testament to the power of convolution on your own networks"}, {"start": 153.42, "end": 159.98, "text": " and as you can see, the rate of progress in AI and machine learning research is absolutely stunning."}, {"start": 159.98, "end": 165.89999999999998, "text": " Also, big thumbs up for the observant fellow scholars out there who noticed the new outro music"}, {"start": 165.89999999999998, "end": 168.85999999999999, "text": " and some other minor changes in the series."}, {"start": 168.85999999999999, "end": 174.22, "text": " If you are among those people, you can consider yourself a hardcore 2 minute paper scholar."}, {"start": 174.22, "end": 175.17999999999998, "text": " High five!"}, {"start": 175.17999999999998, "end": 177.5, "text": " Thanks for watching and for your generous support."}, {"start": 177.5, "end": 193.42, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YWK-bnyXvbg
Space-Time Video Completion | Two Minute Papers #137
The paper "Space-Time Video Completion" is available here: http://www.wisdom.weizmann.ac.il/~vision/VideoCompletion.html Unofficial implementation: http://www2.mta.ac.il/~tal/ImageCompletion/ Disclaimer: as the website mentioned the source code, I incorrectly assumed that it also contains that. Unfortunately, this is not the case. Please have a look at this followup paper with source code, hopefully this will be of help - http://perso.telecom-paristech.fr/~gousseau/video_inpainting/ In the meantime, if you find an implementation of this technique, please let me know and I'll add a link to it here. WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-790220/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about an algorithm that is capable of filling holes in space and time. Classical image inpainting, or in other words, filling holes in images, is something that we've explored in earlier episodes. There are several incredible techniques to take care of that. But with this piece of work, it is possible to generalize such a technique to video and fill holes for not only one image, but a series of images, like removing an umbrella to create an unoccluded view of the beach, or removing a waving person from a video of us jogging. These results are truly incredible, and even though this method was published long, long ago, it still enjoys a great deal of reverence among computer graphics practitioners. Not only that, but this algorithm also serves as the basis of the awesome Content-Aware Fill feature in Adobe Photoshop CS5. In this problem formulation, we have to make sure that our solution has spatio-temporal consistency. What does this mean exactly? This means that holes can exist through space and time, so multiple frames of a video may be missing, or there may be regions that we wish to cut out not only from one image, but from the entirety of the video. The filled-in regions have to be consistent with their surroundings if they are looked at as an image, but there also has to be a consistency across the time domain, otherwise we would see a disturbing flickering effect in the results. It is a really challenging problem indeed, because there are more constraints that we have to adhere to. However, a key observation is that it is also easier, because in return, we have access to more information that comes from the previous and next frames in the video. For instance, here you can see an example of retouching old footage by removing a huge pesky artifact. And clearly, we only know that Charlie Chaplin is supposed to be in the middle of the image because we have this information from the previous and next frames. All this is achieved by an optimization algorithm that takes into consideration that consistency has to be enforced through the spatial and the time domain at the same time. It can also be used to fill in completely missing frames of a video, and it also helps where we have parts of an image missing after being removed by an image stabilizer algorithm. Video editors do this all the time, so such a restoration technique is super useful. The source code of this technique is available, I've put a link to it in the video description. Thanks for watching and for your generous support, and I'll see you next time.
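One way to picture spatio-temporal consistency is to think in terms of small 3D patches spanning x, y, and time. The sketch below is a heavily simplified, greedy Python toy, assuming grayscale frames, an exhaustive (slow) candidate search, and holes that stay at least a patch radius away from the borders; the paper instead solves a global optimization over all such patches rather than filling voxels one by one.

```python
import numpy as np

def fill_video(video, mask, pr=2):
    # video: (T, H, W) grayscale frames, mask: True where a voxel is missing.
    # Assumes every hole voxel lies at least `pr` voxels away from all borders.
    out = video.copy()
    known = ~mask
    T, H, W = video.shape
    for t, y, x in np.argwhere(mask):
        tgt = out[t - pr:t + pr + 1, y - pr:y + pr + 1, x - pr:x + pr + 1]
        val = known[t - pr:t + pr + 1, y - pr:y + pr + 1, x - pr:x + pr + 1]
        best_d, best_v = np.inf, None
        # exhaustive search over fully known spatio-temporal patches (slow, for clarity)
        for tt in range(pr, T - pr):
            for yy in range(pr, H - pr):
                for xx in range(pr, W - pr):
                    if mask[tt - pr:tt + pr + 1, yy - pr:yy + pr + 1, xx - pr:xx + pr + 1].any():
                        continue
                    cand = video[tt - pr:tt + pr + 1, yy - pr:yy + pr + 1, xx - pr:xx + pr + 1]
                    d = np.sum(((tgt - cand) * val) ** 2)  # compare only the known voxels
                    if d < best_d:
                        best_d, best_v = d, cand[pr, pr, pr]
        if best_v is not None:
            out[t, y, x] = best_v
            known[t, y, x] = True
    return out

# tiny demo: 5 frames of a horizontal gradient with a 4x4 hole in frame 2
vid = np.tile(np.linspace(0, 1, 20), (5, 20, 1))
hole = np.zeros(vid.shape, dtype=bool)
hole[2, 8:12, 8:12] = True
print(fill_video(np.where(hole, 0.0, vid), hole)[2, 8:12, 8:12])
```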
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Ejona Efehir."}, {"start": 4.64, "end": 12.0, "text": " Today, we are going to talk about an algorithm that is capable of filling holes in space and time."}, {"start": 12.0, "end": 19.68, "text": " Classical image in painting or in other words, filling holes in images is something that we've explored in earlier episodes."}, {"start": 19.68, "end": 22.88, "text": " There are several incredible techniques to take care of that."}, {"start": 22.88, "end": 27.84, "text": " But with this piece of work, it is possible to generalize such a technique for video"}, {"start": 27.84, "end": 32.24, "text": " and fill holes for not only one image, but a series of images,"}, {"start": 32.24, "end": 36.32, "text": " like removing a nambraela to create an unaccluded view to the beach,"}, {"start": 36.32, "end": 40.08, "text": " or removing a waving person from a video of us jogging."}, {"start": 40.08, "end": 46.0, "text": " These results are truly incredible, and even though this method was published long, long ago,"}, {"start": 46.0, "end": 50.8, "text": " it still enjoys a great deal of reverence among computer graphics practitioners."}, {"start": 50.8, "end": 56.56, "text": " Not only that, but this algorithm also serves the basis of the awesome content to where"}, {"start": 56.56, "end": 59.84, "text": " field feature in Adobe Photoshop CS5."}, {"start": 59.84, "end": 66.0, "text": " In this problem formulation, we have to make sure that our solution has spatio-temporal consistency."}, {"start": 66.0, "end": 67.60000000000001, "text": " What does this mean exactly?"}, {"start": 67.60000000000001, "end": 71.36, "text": " This means that holes can exist through space and time,"}, {"start": 71.36, "end": 76.72, "text": " so multiple frames of a video may be missing, or there may be regions that we wish to cut out"}, {"start": 76.72, "end": 80.32000000000001, "text": " not only for one image, but for the entirety of the video."}, {"start": 80.32000000000001, "end": 85.36, "text": " The field in regions have to be consistent with their surroundings if they are looked at as an image,"}, {"start": 85.36, "end": 90.24, "text": " but there also has to be a consistency across the time domain, otherwise we would see a"}, {"start": 90.24, "end": 92.64, "text": " disturbing flickering effect in the results."}, {"start": 92.64, "end": 97.03999999999999, "text": " It is a really challenging problem indeed, because there are more constraints that we have to"}, {"start": 97.03999999999999, "end": 103.2, "text": " adhere to. However, a key observation is that it is also easier, because in return,"}, {"start": 103.2, "end": 108.56, "text": " we have access to more information that comes from the previous and next frames in the video."}, {"start": 108.56, "end": 114.24, "text": " For instance, here you can see an example of retouching old footage by removing a huge"}, {"start": 114.24, "end": 119.91999999999999, "text": " pesky artifact. 
And clearly, we know the fact that Charlie Chaplin is supposed to be in the middle"}, {"start": 119.91999999999999, "end": 124.64, "text": " of the image, only because we have this information from the previous and next frames."}, {"start": 124.64, "end": 130.32, "text": " All this is achieved by an optimization algorithm that takes into consideration that consistency"}, {"start": 130.32, "end": 135.35999999999999, "text": " has to be enforced through the spatial and the time domain at the same time."}, {"start": 135.35999999999999, "end": 141.84, "text": " It can also be used to fill in completely missing frames of a video, or it also helps where we have"}, {"start": 141.84, "end": 147.76, "text": " parts of an image missing after being removed by an image stabilizer algorithm. Video editors do"}, {"start": 147.76, "end": 153.6, "text": " this all the time, so such a restoration technique is super useful. The source code of this technique"}, {"start": 153.6, "end": 158.08, "text": " is available, I've put a link to it in the video description. Thanks for watching and for your"}, {"start": 158.08, "end": 176.0, "text": " generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8u3Hkbev2Gg
Stable Neural Style Transfer | Two Minute Papers #136
The paper "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses" is available here: https://arxiv.org/abs/1701.08893 Texture synthesis survey: http://www-sop.inria.fr/reves/Basilic/2009/WLKT09/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Andrew Melnychuk, Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1138294/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural style transfer is an incredible technique where we have two input photographs, and the output is a combination of these two, namely the content of one and the artistic style of the other fused together. When the first paper appeared on this topic, the news took the world by storm, and lots of speculative discussions emerged as to what this could be used for, and how it would change digital arts and the video game industry. It is great fun to use these algorithms, and we have also witnessed a recent proliferation of phone apps that are able to accomplish this, which is super cool for two reasons. One, the amount of time to go from a published research paper to an industry-wide application has never been so small, and two, the first work required a powerful computer to accomplish this and took several minutes of strenuous computation, and now, less than two years later, it's right in your pocket and can be done instantly. Talk about exponential progress in science and research. Absolutely amazing. And now, while we feast our eyes upon these beautiful results, let's talk about the selling points of this extension of the original technique. The paper contains a nice formal explanation of the weak points of the existing style transfer algorithms. The intuition behind the explanation is that the neural networks think in terms of neuron activations, which may not be proportional to the color intensities in the source image styles, therefore their behavior often becomes inconsistent or different than expected. The authors propose thinking in terms of histograms, which means that the output image should rely on statistical similarities with the source images. And as we can see, the results look outstanding, even when compared to the original method. It is also important to point out that this proposed technique is more art-directable. Make sure to have a look at the paper for more details on that. As always, I've put a link in the video description. This extension is also capable of texture synthesis, which means that we give it a small image patch that shows some sort of repetition, and it tries to continue it indefinitely, in a way that seems completely seamless. However, we have to be acutely aware of the fact that in the computer graphics community, texture synthesis is considered a subfield of its own with hundreds of papers, and one has to be extremely sure to have a clear-cut selling point over the state of the art. For the more interested Fellow Scholars out there, I've put a survey paper on this in the video description. Make sure to have a look. Thanks for watching and for your generous support, and I'll see you next time.
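The "thinking in terms of histograms" idea can be made concrete with a small sketch: remap the generated activations so their value distribution matches the style activations, then penalize the distance to that remapped version. The Python example below does this on plain arrays; in the actual method the same matching would be applied per channel to deep-network features, and the exact weighting of the loss terms is an assumption here.

```python
import numpy as np

def histogram_match(x, style):
    # return x remapped so that its value distribution matches `style`
    order = np.argsort(x)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(x))
    style_sorted = np.sort(style)
    # for each element of x, look up the style value at the same relative rank
    idx = np.round(ranks * (len(style) - 1) / max(len(x) - 1, 1)).astype(int)
    return style_sorted[idx]

def histogram_loss(features, style_features):
    # features, style_features: (channels, n) activations, flattened per channel
    loss = 0.0
    for f, s in zip(features, style_features):
        loss += np.mean((f - histogram_match(f, s)) ** 2)
    return loss / len(features)

rng = np.random.default_rng(1)
feats = rng.normal(0.0, 1.0, size=(4, 1000))   # stand-in for generated-image features
style = rng.normal(0.5, 2.0, size=(4, 1000))   # stand-in for style-image features
print(histogram_loss(feats, style))
```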
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.6000000000000005, "end": 12.88, "text": " Neural style transfer is an incredible technique where we have two input photographs, and the output would be a combination of these two,"}, {"start": 12.88, "end": 18.64, "text": " namely the content of one and the artistic style of the other fused together."}, {"start": 18.64, "end": 25.84, "text": " When the first paper appeared on this topic, the news took the world by storm, and lots of speculative discussions emerged,"}, {"start": 25.84, "end": 31.52, "text": " as to what this could be used for, and how it would change digital arts and the video game industry."}, {"start": 31.52, "end": 37.519999999999996, "text": " It is great fun to use these algorithms, and we have also witnessed a recent proliferation of phone apps"}, {"start": 37.519999999999996, "end": 42.16, "text": " that are able to accomplish this, which is super cool because of two reasons."}, {"start": 42.16, "end": 49.84, "text": " One, the amount of time to go from a published research paper to industry-wide application has never been so small,"}, {"start": 49.84, "end": 57.68000000000001, "text": " and two, the first work required a powerful computer to accomplish this, and took several minutes of strenuous computation,"}, {"start": 57.68000000000001, "end": 63.92, "text": " and now, less than two years later, it's right in your pocket and can be done instantly."}, {"start": 63.92, "end": 67.44, "text": " Talk about exponential progress in science and research."}, {"start": 67.44, "end": 69.12, "text": " Absolutely amazing."}, {"start": 69.12, "end": 77.60000000000001, "text": " And now, while we feast our eyes upon these beautiful results, let's talk about the selling points of this extension of the original technique."}, {"start": 77.6, "end": 83.75999999999999, "text": " The paper contains a nice formal explanation of the weak points of the existing style transfer algorithms."}, {"start": 83.75999999999999, "end": 90.16, "text": " The intuition behind the explanation is that the neural networks think in terms of neuron activations,"}, {"start": 90.16, "end": 95.19999999999999, "text": " which may not be proportional to the calorie intensities in the source image styles,"}, {"start": 95.19999999999999, "end": 100.32, "text": " therefore their behavior often becomes inconsistent or different than expected."}, {"start": 100.32, "end": 103.6, "text": " The authors propose thinking in terms of histograms,"}, {"start": 103.6, "end": 109.36, "text": " which means that the output image should rely on statistical similarities with the source images."}, {"start": 109.36, "end": 115.11999999999999, "text": " And as we can see, the results look outstanding, even when compared to the original method."}, {"start": 115.11999999999999, "end": 120.39999999999999, "text": " It is also important to point out that this proposed technique is also more arc-directible."}, {"start": 120.39999999999999, "end": 123.28, "text": " Make sure to have a look at the paper for more details on that."}, {"start": 123.28, "end": 126.08, "text": " As always, I've put a link in the video description."}, {"start": 126.08, "end": 129.6, "text": " This extension is also capable of texture synthesis,"}, {"start": 129.6, "end": 134.72, "text": " which means that we give it a small image patch that shows some sort of repetition,"}, {"start": 134.72, "end": 140.48, "text": " and it 
tries to continue it indefinitely, in a way that seems completely seamless."}, {"start": 140.48, "end": 145.51999999999998, "text": " However, we have to be acutely aware of the fact that in the computer graphics community,"}, {"start": 145.51999999999998, "end": 150.32, "text": " texture synthesis is considered a subfield of its own with hundreds of papers,"}, {"start": 150.32, "end": 155.35999999999999, "text": " and one has to be extremely sure to have a clear cut selling point over the scale of the art."}, {"start": 155.36, "end": 159.92000000000002, "text": " For the more interested fellow scholars out there, I've put a survey paper on this in the video"}, {"start": 159.92000000000002, "end": 164.08, "text": " description. Make sure to have a look. Thanks for watching and for your generous support."}, {"start": 164.08, "end": 186.0, "text": " That's CU, next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QFu0vZgMcqk
Breaking DeepMind's Game AI System | Two Minute Papers #135
Our Patreon page is available here: https://www.patreon.com/TwoMinutePapers The paper "Adversarial Attacks on Neural Network Policies" is available here: http://rll.berkeley.edu/adversarial/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim, VR Wizard. https://www.patreon.com/TwoMinutePapers Recommended for you: Breaking Deep Learning Systems With Adversarial Examples - https://www.youtube.com/watch?v=j9FLOinaG94 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1837125/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Not so long ago, Google DeepMind introduced a novel learning algorithm that was able to reach superhuman levels in playing many Atari games. It was a spectacular milestone in AI research. Interestingly, while these learning algorithms are being improved at a staggering pace, there's a parallel subfield where researchers endeavor to break these learning systems by slightly changing the information they are presented with. Fraudulent tampering with images or video feeds, if you will. Imagine a system that is designed to identify what is seen in an image. In an earlier episode, we discussed an adversarial algorithm where, in an amusing example, they added a tiny bit of barely perceptible noise to this image to make the deep neural network misidentify a bus as an ostrich. Machine learning researchers like to call these evil forged images adversarial samples. And now, this time around, OpenAI published a super fun piece of work to fool these game-learning algorithms by changing some of their input visual information. As you will see in a moment, it is so effective that by only using a tiny bit of information, it can turn a powerful learning algorithm into a blabbering idiot. The first method adds a tiny bit of noise to a large portion of the video input, where the difference is barely perceptible, but it forces the learning algorithm to choose a different action than it would have chosen otherwise. In the other one, a different modification was used that has a smaller footprint, for instance in Pong, adding a tiny fake ball to the game to coerce the learner into going down when it was originally planning to go up. The algorithm is able to learn game-specific knowledge for almost any other game to fool the player. Despite the huge difference in the results, I love the elegant mathematical formulation of the two noise types, because despite the fact that they do something radically different, their mathematical formulation is quite similar. Mathematicians like to say that we are solving the same problem while optimizing for different target norms. Beyond DeepMind's deep Q-learning, two other high-quality learning algorithms are also fooled by this technique. In the white box formulation, we have access to the inner workings of the algorithm. But interestingly, a black box formulation is also proposed, where we know much less about the target system, but we know the game itself, so we train our own system and look for weaknesses in that. When we find these weak points, we can use this knowledge to break other systems. I can only imagine how much fun there was to be had for the authors when they were developing these techniques. Super excited to see how this arms race of creating more powerful learning algorithms and, in response, more powerful adversarial techniques to break them develops. In the future, I feel that the robustness of a learning algorithm, or in other words, its resilience against adversarial attacks, will be just as important of a design factor as how powerful it is. There are a ton of videos published on the authors' website, make sure to have a look. And also, if you wish to support the series, make sure to have a look at our Patreon page. We kindly thank you for your contribution. It definitely helps keeping the series running. Thanks for watching and for your generous support. I'll see you next time.
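The "tiny, barely perceptible noise" recipe can be illustrated with a fast-gradient-sign style sketch. The Python example below uses a toy linear softmax policy as a stand-in for a trained deep network; the weights, the epsilon value, and the gradient direction used are illustrative assumptions rather than the exact attack from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_actions = 84 * 84, 4
W = rng.normal(scale=0.01, size=(n_actions, n_pixels))  # hypothetical "trained" policy weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def policy(frame):
    # a toy linear softmax over a flattened frame, standing in for a deep network
    return softmax(W @ frame)

frame = rng.uniform(0.0, 1.0, size=n_pixels)  # a "game frame"
probs = policy(frame)
action = int(np.argmax(probs))

# gradient of the log-probability of the chosen action w.r.t. the input pixels
# (for a linear softmax this is W[action] - sum_a probs[a] * W[a])
grad = W[action] - probs @ W

# nudge every pixel a tiny step *against* that gradient and clamp to the valid range
eps = 0.01
adv_frame = np.clip(frame - eps * np.sign(grad), 0.0, 1.0)

print("original action:", action,
      "action after attack:", int(np.argmax(policy(adv_frame))))
```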
[{"start": 0.0, "end": 5.16, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ijola Ifehir."}, {"start": 5.16, "end": 10.18, "text": " Not so long ago, Google DeepMind introduced a novel learning algorithm that was able"}, {"start": 10.18, "end": 14.0, "text": " to reach superhuman levels in playing many Atari games."}, {"start": 14.0, "end": 17.32, "text": " It was a spectacular milestone in AI research."}, {"start": 17.32, "end": 22.28, "text": " Interestingly, while these learning algorithms are being improved at the staggering pace,"}, {"start": 22.28, "end": 27.0, "text": " there's a parallel subfield where researchers endeavor to break these learning systems"}, {"start": 27.0, "end": 30.36, "text": " by slightly changing the information they are presented with."}, {"start": 30.36, "end": 34.24, "text": " Fraudulent pampering with images or video feeds, if you will."}, {"start": 34.24, "end": 38.84, "text": " Imagine a system that is designed to identify what is seen in an image."}, {"start": 38.84, "end": 44.36, "text": " In an earlier episode, we discussed an adversarial algorithm where, in an amusing example,"}, {"start": 44.36, "end": 49.6, "text": " they added a tiny bit of barely perceptible noise to this image to make the deep neural"}, {"start": 49.6, "end": 53.480000000000004, "text": " network misidentify a bus for an ostrich."}, {"start": 53.48, "end": 59.04, "text": " Machine learning researchers like to call these evil forged images adversarial samples."}, {"start": 59.04, "end": 64.67999999999999, "text": " And now, this time around, OpenAI published a super fun piece of work to fool these game"}, {"start": 64.67999999999999, "end": 69.2, "text": " learning algorithms by changing some of their input visual information."}, {"start": 69.2, "end": 75.08, "text": " As you will see in a moment, it is so effective that by only using a tiny bit of information,"}, {"start": 75.08, "end": 79.84, "text": " it can turn a powerful learning algorithm into a blabbering idiot."}, {"start": 79.84, "end": 84.88000000000001, "text": " The first method adds a tiny bit of noise to a large portion of the video input, where"}, {"start": 84.88000000000001, "end": 89.52000000000001, "text": " the difference is barely perceptible, but it forces the learning algorithm to choose"}, {"start": 89.52000000000001, "end": 92.84, "text": " a different action that it would have chosen otherwise."}, {"start": 92.84, "end": 97.72, "text": " In the other one, a different modification was used that has a smaller footprint, for"}, {"start": 97.72, "end": 103.24000000000001, "text": " instance in Pong, adding a tiny fake ball to the game to coerce the learner into going"}, {"start": 103.24000000000001, "end": 106.56, "text": " down when it was originally planning to go up."}, {"start": 106.56, "end": 111.48, "text": " The algorithm is able to learn game specific knowledge for almost any other game to fool"}, {"start": 111.48, "end": 112.8, "text": " the player."}, {"start": 112.8, "end": 117.36, "text": " Despite the huge difference in the results, I love the elegant mathematical formulation"}, {"start": 117.36, "end": 122.68, "text": " of the two noise types, because despite the fact that they do something radically different,"}, {"start": 122.68, "end": 125.64, "text": " their mathematical formulation is quite similar."}, {"start": 125.64, "end": 130.56, "text": " Mathematicians like to say that we are solving the same problem while optimizing for different"}, {"start": 130.56, "end": 
132.12, "text": " target norms."}, {"start": 132.12, "end": 136.84, "text": " One deep mind's deep-queue learning, two other high-quality learning algorithms are also"}, {"start": 136.84, "end": 138.52, "text": " fooled by this technique."}, {"start": 138.52, "end": 143.08, "text": " In the white box formulation, we have access to the inner workings of the algorithm."}, {"start": 143.08, "end": 148.32, "text": " But interestingly, a black box formulation is also proposed, where we know much less about"}, {"start": 148.32, "end": 154.0, "text": " the target system, but we know the game itself and we train our own system and look for weaknesses"}, {"start": 154.0, "end": 155.0, "text": " in that."}, {"start": 155.0, "end": 159.24, "text": " When we found these weak points, we used this knowledge to break other systems."}, {"start": 159.24, "end": 163.88, "text": " I can only imagine how much fun there was to be had for the authors when they were developing"}, {"start": 163.88, "end": 165.20000000000002, "text": " these techniques."}, {"start": 165.20000000000002, "end": 169.96, "text": " Super excited to see how these arms raised of creating more powerful learning algorithms"}, {"start": 169.96, "end": 175.16, "text": " and in response, more powerful adversarial techniques to break them develops."}, {"start": 175.16, "end": 179.96, "text": " In the future, I feel that the robustness of a learning algorithm, or in other words,"}, {"start": 179.96, "end": 185.0, "text": " its resilience against adversarial attacks will be just as important of a design factor"}, {"start": 185.0, "end": 186.92000000000002, "text": " as how powerful it is."}, {"start": 186.92, "end": 191.51999999999998, "text": " There are a ton of videos published on the authors' website, make sure to have a look."}, {"start": 191.51999999999998, "end": 196.07999999999998, "text": " And also, if you wish to support the series, make sure to have a look at our Patreon page."}, {"start": 196.07999999999998, "end": 198.32, "text": " We kindly thank you for your contribution."}, {"start": 198.32, "end": 200.67999999999998, "text": " It definitely helps keeping the series running."}, {"start": 200.67999999999998, "end": 202.95999999999998, "text": " Thanks for watching and for your generous support."}, {"start": 202.96, "end": 220.8, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=brs1qCDzRdk
Automatic Creation of Sketch Tutorials | Two Minute Papers #134
The paper "How2Sketch: Generating Easy-To-Follow Tutorials for Sketching 3D Objects" is available here: http://geometry.cs.ucl.ac.uk/projects/2017/how2sketch/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1582108/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have a look at this magnificent idea. The input is a digital 3D model of an object and the viewpoint of our choice, and the output is an easy-to-follow step-by-step breakdown on how to draw it. Automated drawing tutorials. I wish tools like this were available back when I was a child. Awesome. This technique offers a way to create the scaffoldings that help achieve the correct perspective and positioning for the individual elements of the 3D model, something that novice artists often struggle with. This problem is particularly challenging because we have a bunch of competing solutions, and we have to decide which one should be presented to the user. To achieve this, we have to include a sound mathematical description of how easy a drawing process is. The algorithm also makes adjustments to the individual parts of the model to make them easier to draw without introducing severe distortions to the shapes. The proposed technique uses graph theory to find a suitable ordering of the drawing steps. Beyond the scientific parts, there are a lot of usability issues to be taken into consideration. For instance, the algorithm should notify the user when a given guide is not to be used anymore and can be safely erased. Novice, apprentice, and adept users are also to be handled differently. To show the validity of this solution, the authors made a user study where they tested this new tutorial type against the most common existing solution, and found that the users were not only able to create more accurate drawings with it, but they were also enjoying the process more. I commend the authors for taking into consideration the overall experience of the drawing process, which is an incredibly important factor. If the user enjoys the process, he'll surely come back for more, and of course, the more we show up, the more we learn. Some of these tutorials are available on the website of the authors; as always, I've linked it in the video description. If you're in the mood to draw, make sure to give it a go and let us know how it went in the comments section. Hell, even I am now in the mood to give this a try. If I disappear for a while, you know where I am. Thanks for watching and for your generous support, and I'll see you next time.
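The mention of graph theory above hints at a dependency graph over guides and parts: a guide must be drawn before anything positioned relative to it. The Python sketch below runs a plain topological sort on a made-up dependency list; the part names are hypothetical, and the real system additionally scores candidate orderings by how easy they are to follow.

```python
from collections import defaultdict, deque

# edge (a, b) means "guide/part a must be drawn before b" (example data, not from the paper)
edges = [
    ("bounding box", "body scaffold"),
    ("body scaffold", "body outline"),
    ("bounding box", "handle scaffold"),
    ("handle scaffold", "handle outline"),
    ("body outline", "details"),
    ("handle outline", "details"),
]

deps = defaultdict(int)        # number of unmet prerequisites per step
children = defaultdict(list)   # steps unlocked by each step
nodes = set()
for a, b in edges:
    children[a].append(b)
    deps[b] += 1
    nodes.update((a, b))

# Kahn's algorithm: repeatedly emit a step whose prerequisites are all drawn
queue = deque(sorted(n for n in nodes if deps[n] == 0))
order = []
while queue:
    n = queue.popleft()
    order.append(n)
    for c in children[n]:
        deps[c] -= 1
        if deps[c] == 0:
            queue.append(c)

print(" -> ".join(order))
```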
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Ejonei Fahir."}, {"start": 4.48, "end": 7.08, "text": " Have a look at this magnificent idea."}, {"start": 7.08, "end": 12.96, "text": " The input is a digital 3D model of an object and the viewpoint of our choice and the output"}, {"start": 12.96, "end": 17.72, "text": " is an easy to follow step-by-step breakdown on how to draw it."}, {"start": 17.72, "end": 19.48, "text": " Automated drawing tutorials."}, {"start": 19.48, "end": 23.28, "text": " I wish tools like this were available back when I was a child."}, {"start": 23.28, "end": 24.6, "text": " Awesome."}, {"start": 24.6, "end": 29.96, "text": " This technique offers a way to create the scaffoldings to help achieving the correct perspective"}, {"start": 29.96, "end": 35.56, "text": " and positioning for the individual elements of the 3D model, something that novice artists"}, {"start": 35.56, "end": 37.2, "text": " often struggle with."}, {"start": 37.2, "end": 41.760000000000005, "text": " This problem is particularly challenging because we have a bunch of competing solutions"}, {"start": 41.760000000000005, "end": 45.56, "text": " and we have to decide which one should be presented to the user."}, {"start": 45.56, "end": 51.040000000000006, "text": " To achieve this, we have to include a sound mathematical description of how easy a drawing"}, {"start": 51.040000000000006, "end": 52.28, "text": " process is."}, {"start": 52.28, "end": 57.0, "text": " The algorithm also makes adjustments to the individual parts of the model to make them"}, {"start": 57.0, "end": 61.44, "text": " easier to draw without introducing severe distortions to the shapes."}, {"start": 61.44, "end": 66.96000000000001, "text": " The proposed technique uses graph theory to find a suitable ordering of the drawing steps."}, {"start": 66.96000000000001, "end": 72.28, "text": " Beyond the scientific parts, there are a lot of usability issues to be taken into consideration."}, {"start": 72.28, "end": 77.24000000000001, "text": " For instance, the algorithm should notify the user when a given guide is not to be used"}, {"start": 77.24000000000001, "end": 80.08, "text": " anymore and can be safely erased."}, {"start": 80.08, "end": 85.24000000000001, "text": " novice, apprentice and adapt users are also to be handled differently."}, {"start": 85.24, "end": 89.83999999999999, "text": " To show the validity of this solution, the authors made a user study where they tested"}, {"start": 89.83999999999999, "end": 95.11999999999999, "text": " this new tutorial type against the most common existing solution and found that the users"}, {"start": 95.11999999999999, "end": 100.03999999999999, "text": " were not only able to create more accurate drawings with it, but they were also enjoying"}, {"start": 100.03999999999999, "end": 101.44, "text": " the process more."}, {"start": 101.44, "end": 107.16, "text": " I commend the authors for taking into consideration the overall experience of the drawing process,"}, {"start": 107.16, "end": 109.39999999999999, "text": " which is an incredibly important factor."}, {"start": 109.39999999999999, "end": 114.08, "text": " If the user enjoyed the process, he'll surely come back for more and of course, the more"}, {"start": 114.08, "end": 116.39999999999999, "text": " we show up, the more we learn."}, {"start": 116.39999999999999, "end": 120.52, "text": " Some of these tutorials are available on the website of the authors, as always, I've"}, 
{"start": 120.52, "end": 122.28, "text": " linked it in the video description."}, {"start": 122.28, "end": 126.12, "text": " If you're in the mood to draw, make sure to give it a go and let us know how it went"}, {"start": 126.12, "end": 127.12, "text": " in the comment section."}, {"start": 127.12, "end": 130.44, "text": " Hell, even I am now in the mood to give this a try."}, {"start": 130.44, "end": 132.96, "text": " If I disappear for a while, you know where I am."}, {"start": 132.96, "end": 151.92000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u7kQ5lNfUfg
AI Makes Stunning Photos From Your Drawings (pix2pix) | Two Minute Papers #133
Online demo of pix2pix (try drawing there!): https://affinelayer.com/pixsrv/ The paper "Image-to-Image Translation with Conditional Adversarial Nets" and its source code is available here: https://phillipi.github.io/pix2pix/ Twitter: https://twitter.com/search?vertical=default&q=pix2pix&src=typd More amusing results: http://www.neogaf.com/forum/showthread.php?t=1346254&page=1 http://thechive.com/2017/02/22/this-drawing-to-image-machine-is-made-of-nightmares-17-photos/ Recommended for you: Image Editing with Generative Adversarial Networks - https://www.youtube.com/watch?v=pqkpIfu36Os WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-408746/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Kato Ejone Fahir. In an earlier work, we were able to change a photo of an already existing design according to our taste. That was absolutely amazing. But now, hold on to your papers and have a look at this. Because here, we can create something out of thin air. The input in this problem formulation is an image, and the output is an image of a different kind. Let's call this process image translation. It is translation in a sense that, for instance, we can add an aerial view of a city as an input and get a map of this city as an output. Or, we can draw the silhouette of a handbag and have it translated to an actual, real looking object. And we can go even crazier. For instance, day to night conversion of a photograph is also possible. But an incredible idea and look at the quality of the execution, ice cream for my eyes. And as always, please don't think of this algorithm as the end of the road. Like all papers, this is a stepping stone, and a few more works down the line, the kings will be fixed, and the output quality is going to be vastly improved. The technique uses a conditional adversarial network to accomplish this. This works the following way. There is a generative neural network that creates new images all day, and the discriminator network is also available all day to judge whether these images look natural or not. During this process, the generator network learns to draw more realistic images, and the discriminator network learns to tell fake images from real ones. If they train together for long enough, they will be able to reliably create these image translations for a large set of different scenarios. There are two key differences that make this piece of work stand out from the classical generative adversarial networks. One, both neural networks have the opportunity to look at the before and after images. Normally, we restrict the problem to only looking at the after images, the final results. And two, instead of only positive, both positive and negative examples are generated. This means that the generator network is also asked to create really bad images on purpose, so that the discriminator network can more reliably learn the distinction between flippant attempts and quality craftsmanship. Another great selling point here is that we don't need several different algorithms for each of the cases. The same generic approach is used for all the maps and photographs. The only thing that is different is the training data. Here has blown up with fun experiments, most of them include cute drawings ending up as horrifying looking cats. As the title of the video says, the results are always going to be stunning, but sometimes a different kind of stunning than we'd expect. It's so delightful to see that people are having a great time with this technique and it is always a great choice to put out such a work for a wide audience to play with. And if you get excited for this project, there are tons, and I mean tons of links in the video description, including one to the source code of the project, so make sure to have a look and read up some more on the topic there's going to be lots of fun to be had. You can also try it for yourself, there's a link to an online demo in the description, and if you post your results in the comment section, I guarantee there will be some amusing discussions. I feel that soon, a new era of video games and movies will dawn where most of the digital models are drawn by computers. 
Because automation and mass production are the standard in many industries nowadays, we'll surely be hearing people going, do you remember the good old times when video games were handcrafted? Man, those were the days. If you enjoyed this episode, make sure to subscribe to the series, we try our best to put out two of these videos per week. We would be happy to have you join our growing club of fellow scholars and be a part of our journey into the world of incredible research works such as this one. Thanks for watching and for your generous support, I'll see you next time.
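To make the adversarial training idea above more concrete, here is a minimal Python sketch of one conditional-GAN training step. This is not the authors' implementation: the two tiny convolutional networks, the learning rates and the L1 weight below are toy stand-ins chosen purely for illustration (the real pix2pix model uses a U-Net generator and a PatchGAN discriminator), but the structure shows how the discriminator always judges the before-and-after pair, and how the generator is pushed both to fool it and to stay close to the ground truth.

import torch
import torch.nn as nn

# Toy stand-in networks just to show the structure; the real pix2pix model uses a
# U-Net generator and a PatchGAN discriminator.
G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())       # input image -> translated image
D = nn.Sequential(nn.Conv2d(6, 64, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(64, 1, 3, padding=1))                   # (input, candidate) pair -> real/fake scores

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y):
    """x: input image (e.g. an edge map), y: the matching real photo; both (B, 3, H, W) in [-1, 1]."""
    fake = G(x)

    # The discriminator always sees the before-and-after pair, so it judges the
    # translation itself, not just whether the output looks like a photo.
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # The generator tries to fool the discriminator and to stay close to the ground truth (L1 term).
    d_fake = D(torch.cat([x, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * (fake - y).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with random tensors standing in for one training pair:
x = torch.rand(1, 3, 64, 64) * 2 - 1
y = torch.rand(1, 3, 64, 64) * 2 - 1
print(train_step(x, y))

In this sketch, the L1 term is what keeps the translation anchored to the input; dropping it tends to produce plausible but unfaithful outputs.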
[{"start": 0.0, "end": 5.16, "text": " Dear Fellow Scholars, this is two minute papers with Kato Ejone Fahir."}, {"start": 5.16, "end": 10.66, "text": " In an earlier work, we were able to change a photo of an already existing design according"}, {"start": 10.66, "end": 12.02, "text": " to our taste."}, {"start": 12.02, "end": 14.26, "text": " That was absolutely amazing."}, {"start": 14.26, "end": 18.2, "text": " But now, hold on to your papers and have a look at this."}, {"start": 18.2, "end": 21.64, "text": " Because here, we can create something out of thin air."}, {"start": 21.64, "end": 27.240000000000002, "text": " The input in this problem formulation is an image, and the output is an image of a different"}, {"start": 27.240000000000002, "end": 28.240000000000002, "text": " kind."}, {"start": 28.24, "end": 31.159999999999997, "text": " Let's call this process image translation."}, {"start": 31.159999999999997, "end": 36.76, "text": " It is translation in a sense that, for instance, we can add an aerial view of a city as an"}, {"start": 36.76, "end": 40.64, "text": " input and get a map of this city as an output."}, {"start": 40.64, "end": 46.44, "text": " Or, we can draw the silhouette of a handbag and have it translated to an actual, real looking"}, {"start": 46.44, "end": 47.599999999999994, "text": " object."}, {"start": 47.599999999999994, "end": 49.72, "text": " And we can go even crazier."}, {"start": 49.72, "end": 54.12, "text": " For instance, day to night conversion of a photograph is also possible."}, {"start": 54.12, "end": 60.599999999999994, "text": " But an incredible idea and look at the quality of the execution, ice cream for my eyes."}, {"start": 60.599999999999994, "end": 65.08, "text": " And as always, please don't think of this algorithm as the end of the road."}, {"start": 65.08, "end": 69.56, "text": " Like all papers, this is a stepping stone, and a few more works down the line, the"}, {"start": 69.56, "end": 74.56, "text": " kings will be fixed, and the output quality is going to be vastly improved."}, {"start": 74.56, "end": 79.28, "text": " The technique uses a conditional adversarial network to accomplish this."}, {"start": 79.28, "end": 81.28, "text": " This works the following way."}, {"start": 81.28, "end": 86.44, "text": " There is a generative neural network that creates new images all day, and the discriminator"}, {"start": 86.44, "end": 92.72, "text": " network is also available all day to judge whether these images look natural or not."}, {"start": 92.72, "end": 97.96000000000001, "text": " During this process, the generator network learns to draw more realistic images, and the"}, {"start": 97.96000000000001, "end": 102.24000000000001, "text": " discriminator network learns to tell fake images from real ones."}, {"start": 102.24000000000001, "end": 107.04, "text": " If they train together for long enough, they will be able to reliably create these image"}, {"start": 107.04, "end": 110.6, "text": " translations for a large set of different scenarios."}, {"start": 110.6, "end": 114.88, "text": " There are two key differences that make this piece of work stand out from the classical"}, {"start": 114.88, "end": 117.47999999999999, "text": " generative adversarial networks."}, {"start": 117.47999999999999, "end": 123.52, "text": " One, both neural networks have the opportunity to look at the before and after images."}, {"start": 123.52, "end": 129.12, "text": " Normally, we restrict the problem to only looking at the after images, the final results."}, 
{"start": 129.12, "end": 135.32, "text": " And two, instead of only positive, both positive and negative examples are generated."}, {"start": 135.32, "end": 141.12, "text": " This means that the generator network is also asked to create really bad images on purpose,"}, {"start": 141.12, "end": 146.12, "text": " so that the discriminator network can more reliably learn the distinction between flippant"}, {"start": 146.12, "end": 149.0, "text": " attempts and quality craftsmanship."}, {"start": 149.0, "end": 152.95999999999998, "text": " Another great selling point here is that we don't need several different algorithms for"}, {"start": 152.95999999999998, "end": 154.35999999999999, "text": " each of the cases."}, {"start": 154.35999999999999, "end": 158.56, "text": " The same generic approach is used for all the maps and photographs."}, {"start": 158.56, "end": 161.72, "text": " The only thing that is different is the training data."}, {"start": 161.72, "end": 166.52, "text": " Here has blown up with fun experiments, most of them include cute drawings ending up"}, {"start": 166.52, "end": 168.8, "text": " as horrifying looking cats."}, {"start": 168.8, "end": 174.28, "text": " As the title of the video says, the results are always going to be stunning, but sometimes"}, {"start": 174.28, "end": 177.16, "text": " a different kind of stunning than we'd expect."}, {"start": 177.16, "end": 181.44, "text": " It's so delightful to see that people are having a great time with this technique and it is"}, {"start": 181.44, "end": 186.28, "text": " always a great choice to put out such a work for a wide audience to play with."}, {"start": 186.28, "end": 191.16, "text": " And if you get excited for this project, there are tons, and I mean tons of links in the"}, {"start": 191.16, "end": 195.68, "text": " video description, including one to the source code of the project, so make sure to have"}, {"start": 195.68, "end": 200.24, "text": " a look and read up some more on the topic there's going to be lots of fun to be had."}, {"start": 200.24, "end": 204.68, "text": " You can also try it for yourself, there's a link to an online demo in the description,"}, {"start": 204.68, "end": 209.35999999999999, "text": " and if you post your results in the comment section, I guarantee there will be some amusing"}, {"start": 209.35999999999999, "end": 210.35999999999999, "text": " discussions."}, {"start": 210.35999999999999, "end": 215.96, "text": " I feel that soon, a new era of video games and movies will dawn where most of the digital"}, {"start": 215.96, "end": 218.4, "text": " models are drawn by computers."}, {"start": 218.4, "end": 223.36, "text": " Because automation and mass producing is a standard in many industries nowadays, we'll surely"}, {"start": 223.36, "end": 229.16, "text": " be hearing people going, do you remember the good old times when video games were handcrafted?"}, {"start": 229.16, "end": 231.24, "text": " Man, those were the days."}, {"start": 231.24, "end": 236.28, "text": " If you enjoyed this episode, make sure to subscribe to the series, we try our best to put out"}, {"start": 236.28, "end": 238.12, "text": " two of these videos per week."}, {"start": 238.12, "end": 242.64000000000001, "text": " We would be happy to have you join our growing club of fellow scholars and be a part of"}, {"start": 242.64000000000001, "end": 247.04000000000002, "text": " our journey to the world of incredible research works such as this one."}, {"start": 247.04, "end": 250.64, "text": " Thanks for watching 
and for your generous support, I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JzOc_NNY_zY
Real-Time Fiber-Level Cloth Rendering | Two Minute Papers #132
The paper "Real-time Fiber-level Cloth Rendering" is available here: http://www.cs.utah.edu/~kwu/rtfr.html WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://pixabay.com/photo-1856679/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work shows us how to render a piece of cloth down to the level of fibers. This is a difficult problem, because we need to be able to handle models that are built from potentially over a hundred million fiber curves. This technique supports a variety of goodies. One, level of detail is possible. This means that the closer we get to the cloth, the more details appear, and it is possible to create a highly optimized algorithm that doesn't render these details when they are not visible. This means a huge performance boost if we are zoomed out. Two, optimizations are introduced so that fiber-level self-shadows are computed in real time, which would normally be an extremely long process. Note that we are talking millions of fibers here. And three, the graphics card in your computer is amazingly effective at computing hundreds of things in parallel. However, its weak point is data transfer, at which it is woefully slow, to the point that it is often worth recomputing multiple gigabytes of data right on it just to avoid uploading it to its memory again. This algorithm generates the fiber curves directly on the graphics card to minimize such data transfers, and hence it maps to the graphics card really effectively. And the result is a remarkable technique that can render a piece of cloth down to the tiniest details, with multiple different kinds of yarn models, and in real time. What I really like about this piece of work is that this is not a stepping stone. This could be used in many state-of-the-art systems as is, right now. The authors also made the cloth models available for easier comparisons in follow-up research works. Thanks for watching and for your generous support. See you next time.
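As a rough illustration of two of these ideas, procedural fiber generation and distance-based level of detail, here is a tiny Python sketch. It is not the paper's GPU implementation (the real method generates the curves in a shader stage with a proper yarn model and moving frame); the fiber counts, radii and twist below are made-up numbers and the helix offsets use a crude fixed frame, but it shows how far-away yarn can simply skip generating most of its fibers in the first place.

import numpy as np

def lod_fiber_count(distance, full_count=120, near=0.5, far=20.0):
    """Generate fewer fibers per yarn as the camera moves away; near 'far' only the core survives."""
    t = np.clip((distance - near) / (far - near), 0.0, 1.0)
    return max(1, int(round(full_count * (1.0 - t))))

def generate_fibers(center_pts, n_fibers, yarn_radius=0.05, twist=8.0):
    """Wrap n_fibers helical fiber curves around a polyline of yarn center points.
    Offsets are taken in the YZ plane, which is fine for a yarn running along the X axis."""
    center_pts = np.asarray(center_pts, dtype=float)            # (N, 3) yarn centerline
    n = len(center_pts)
    s = np.linspace(0.0, 1.0, n)                                 # parameter along the yarn
    fibers = []
    for k in range(n_fibers):
        phase = 2.0 * np.pi * k / n_fibers
        angle = phase + twist * 2.0 * np.pi * s
        offset = yarn_radius * np.stack([np.zeros(n), np.cos(angle), np.sin(angle)], axis=1)
        fibers.append(center_pts + offset)                       # one (N, 3) fiber polyline
    return fibers

# A straight yarn seen from up close and from far away:
center = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50), np.zeros(50)], axis=1)
for d in (1.0, 15.0):
    fibers = generate_fibers(center, lod_fiber_count(d))
    print(f"camera distance {d:4.1f} -> {len(fibers)} fiber curves generated")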
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.04, "end": 11.44, "text": " This piece of work shows us how to render a piece of cloth down to the level of fibers."}, {"start": 11.44, "end": 15.68, "text": " This is a difficult problem because we need to be able to handle models that are built"}, {"start": 15.68, "end": 19.32, "text": " from potentially over a hundred million fiber curves."}, {"start": 19.32, "end": 22.32, "text": " This technique supports a variety of goodies."}, {"start": 22.32, "end": 25.6, "text": " One, level of detail is possible."}, {"start": 25.6, "end": 30.92, "text": " This means that the closer we get to the cloth, the more details appear and that it is possible"}, {"start": 30.92, "end": 36.84, "text": " to create a highly optimized algorithm that doesn't render these details when they are not visible."}, {"start": 36.84, "end": 40.6, "text": " This means a huge performance boost if we are zoomed out."}, {"start": 40.6, "end": 47.760000000000005, "text": " Two, optimizations are introduced so that fiber-level self-shadows are computed in real time,"}, {"start": 47.760000000000005, "end": 50.84, "text": " which would normally be an extremely long process."}, {"start": 50.84, "end": 53.88, "text": " Note that we are talking millions of fibers here."}, {"start": 53.88, "end": 59.92, "text": " And three, the graphical card in your computer is amazingly effective at computing hundreds"}, {"start": 59.92, "end": 61.400000000000006, "text": " of things in parallel."}, {"start": 61.400000000000006, "end": 67.48, "text": " However, its weak point is data transfer, at which it is woefully slow to the point that"}, {"start": 67.48, "end": 73.68, "text": " it is often worth recomputing multiple gigabytes of data right on it just to avoid uploading"}, {"start": 73.68, "end": 75.32000000000001, "text": " it to its memory again."}, {"start": 75.32000000000001, "end": 81.0, "text": " This algorithm generates the fiber curves directly on the graphical card to minimize such data"}, {"start": 81.0, "end": 85.32, "text": " transfers and hence it maps really effectively to the graphical card."}, {"start": 85.32, "end": 90.36, "text": " And the result is a remarkable technique that can render a piece of cloth down to the"}, {"start": 90.36, "end": 96.4, "text": " tiniest details with multiple different kinds of yarn models and in real time."}, {"start": 96.4, "end": 100.72, "text": " What I really like about this piece of work is that this is not a stepping stone."}, {"start": 100.72, "end": 105.68, "text": " This could be used in many state of the art systems as is right now."}, {"start": 105.68, "end": 110.92, "text": " The authors also made the cloth models available for easier comparisons in follow-up research"}, {"start": 110.92, "end": 111.92, "text": " works."}, {"start": 111.92, "end": 113.92, "text": " Thanks for watching and for your generous support."}, {"start": 113.92, "end": 130.68, "text": " See you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZUa5sNVSjGw
Shape and Material from Video | Two Minute Papers #131
The paper "Recovering Shape and Spatially-Varying Surface Reflectance under Unknown Illumination" is available here: http://www.cs.wm.edu/~ppeers/showPublication.php?id=Xia:2016:RSS WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://www.flickr.com/photos/8143264@N08/4946656511/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Imagine the following. We put an object on a robot arm and the input is a recorded video of it. And the output would be the digital geometry and the material model for this object. This geometry and material model, we can plug into a photorealistic light simulation program to have a digital copy of our real world object. First, we have to be wary of the fact that normally, solving such a problem sounds completely hopeless. We have three variables that we have to take into consideration: the lighting in the room, the geometry of the object, and the material properties of the object. If any two of the three variables are known, the problem is relatively simple and there are already existing works to address these combinations. For instance, if we create a studio lighting setup and know the geometry of the object as well, it is not that difficult to capture the material properties. Also, if the material properties and lighting are known, there are methods to extract the geometry of the object. However, this is a way more difficult formulation of the problem, because out of the three variables, not two, not one, but zero are known. We don't have control over the lighting, the material properties can be arbitrary, and the geometry can also be anything. Several very sensible assumptions are being made, such as that our camera has to be stationary and the rotation directions of the object should be known, and some more of these are discussed in the paper in detail. All of them are quite sensible and they don't feel limiting. The algorithm works the following way. In the first step, we estimate the lighting, and leaning on this estimation, we build a rough initial surface model. In the second step, using this surface model, we can see a bit more clearly and can therefore refine our initial guess for the lighting and the material model. However, now that we know the lighting a bit better, we can again get back to the surface reconstruction and improve our solution there. This entire process happens iteratively, which means that first we obtain a very rough initial guess for the surface, and we constantly refine this piece of surface to get closer and closer to the final solution. And now, feast your eyes on these incredible results and marvel at the fact that we know next to nothing about the input, and the geometry and material properties almost magically appear on the screen. This is an amazing piece of work, and lots and lots of details and results are discussed in the paper, which is quite well written and was a joy to read. Make sure to have a look at it. The link is available in the video description. Also, thanks for all the kind comments. I've experienced a recent influx of emails from people all around the world expressing how they are enjoying the series, and many of them telling their personal stories and their relation to science, and how these new inventions are being talked about over dinner with the family and relatives. It has been such a delight to read these messages. Thank you. Thanks for watching and for your generous support.
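The back-and-forth refinement described above is easiest to see in code. Below is a deliberately tiny Python sketch of that alternating structure only, not a reconstruction pipeline: the render function is a made-up stand-in for the forward model, and the lighting, shape and material are just small parameter vectors, but the loop shows how fixing two of the unknowns turns refining the third into a much easier problem, and how repeating this drives the photometric error down.

import numpy as np

rng = np.random.default_rng(0)
true_light, true_shape, true_material = rng.normal(size=(3, 4))   # ground truth we pretend not to know

def render(light, shape, material):
    """Made-up stand-in for the forward model: mixes the three unknowns into an 'image' vector."""
    return np.concatenate([light * shape, shape + material, light - material])

observed = render(true_light, true_shape, true_material)          # plays the role of the input video

def refine(block_index, blocks, steps=300, lr=0.02):
    """Nudge one block of unknowns (0: light, 1: shape, 2: material) downhill on the
    photometric error while the other two blocks are held fixed."""
    x = blocks[block_index]
    def error():
        return np.sum((render(*blocks) - observed) ** 2)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(len(x)):                                   # numerical gradient, one entry at a time
            old = x[i]
            x[i] = old + 1e-4; e_plus = error()
            x[i] = old - 1e-4; e_minus = error()
            x[i] = old
            grad[i] = (e_plus - e_minus) / 2e-4
        x -= lr * grad                                            # updates blocks[block_index] in place

blocks = [np.zeros(4), np.zeros(4), np.zeros(4)]                  # rough initial guesses
for it in range(5):                                               # outer loop: lighting -> shape -> material
    for b in range(3):
        refine(b, blocks)
    err = np.sum((render(*blocks) - observed) ** 2)
    print(f"outer iteration {it}: photometric error {err:.5f}")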
[{"start": 0.0, "end": 5.24, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoizuna Ifeher."}, {"start": 5.24, "end": 6.5600000000000005, "text": " Imagine the following."}, {"start": 6.5600000000000005, "end": 11.56, "text": " We put an object on a robot arm and the input is a recorded video of it."}, {"start": 11.56, "end": 17.04, "text": " And the output would be the digital geometry and the material model for this object."}, {"start": 17.04, "end": 22.68, "text": " This geometry and material model, we can plug into a photorealistic light simulation program"}, {"start": 22.68, "end": 25.68, "text": " to have a digital copy of our real world object."}, {"start": 25.68, "end": 31.6, "text": " First, we have to be wary of the fact that normally solving such a problem sounds completely"}, {"start": 31.6, "end": 32.6, "text": " hopeless."}, {"start": 32.6, "end": 35.88, "text": " We have three variables that we have to take into consideration."}, {"start": 35.88, "end": 40.76, "text": " The lighting in the room, the geometry of the object, and the material properties of the"}, {"start": 40.76, "end": 41.760000000000005, "text": " object."}, {"start": 41.760000000000005, "end": 46.72, "text": " If any two of the three variables is known, the problem is relatively simple and there"}, {"start": 46.72, "end": 50.2, "text": " are already existing works to address these combinations."}, {"start": 50.2, "end": 54.96, "text": " For instance, if we create a studio lighting setup and know the geometry of the object"}, {"start": 54.96, "end": 59.2, "text": " as well, it is not that difficult to capture the material properties."}, {"start": 59.2, "end": 64.56, "text": " Also, if the material properties and lighting is known, there are methods to extract the"}, {"start": 64.56, "end": 66.4, "text": " geometry of the object."}, {"start": 66.4, "end": 72.6, "text": " However, this is a way more difficult formulation of the problem because out of the three variables,"}, {"start": 72.6, "end": 77.44, "text": " not two, not one, but zero are known."}, {"start": 77.44, "end": 82.92, "text": " We don't have control over the lighting, the material properties can be arbitrary and"}, {"start": 82.92, "end": 86.0, "text": " the geometry can also be anything."}, {"start": 86.0, "end": 91.84, "text": " Several very sensible assumptions are being made, such as that our camera has to be stationary"}, {"start": 91.84, "end": 96.56, "text": " and the rotation directions of the object should be known and some more of these are discussed"}, {"start": 96.56, "end": 98.16, "text": " in the paper in detail."}, {"start": 98.16, "end": 101.8, "text": " All of them are quite sensible and they don't feel limiting."}, {"start": 101.8, "end": 104.0, "text": " The algorithm works the following way."}, {"start": 104.0, "end": 108.96000000000001, "text": " In the first step, we estimate the lighting and leaning on this estimation, we build a"}, {"start": 108.96, "end": 114.83999999999999, "text": " rough initial surface model and in the second step, using this surface model, we see a bit"}, {"start": 114.83999999999999, "end": 121.32, "text": " clearer and therefore we can refine our initial guess for the lighting and the material model."}, {"start": 121.32, "end": 126.8, "text": " However, now that we know the lighting a bit better, we can again get back to the surface"}, {"start": 126.8, "end": 129.92, "text": " reconstruction and improve our solution there."}, {"start": 129.92, "end": 135.84, "text": " This entire 
process happens iteratively, which means that first we obtain a very rough"}, {"start": 135.84, "end": 141.08, "text": " initial guess for the surface and we constantly refine this piece of surface to get closer"}, {"start": 141.08, "end": 148.68, "text": " and closer to the final solution."}, {"start": 148.68, "end": 154.24, "text": " And now, feast your eyes on these incredible results and marvel at the fact that we know"}, {"start": 154.24, "end": 160.72, "text": " next to nothing about the input and the geometry and material properties almost magically appear"}, {"start": 160.72, "end": 162.04, "text": " on the screen."}, {"start": 162.04, "end": 166.92, "text": " This is an amazing piece of work and lots and lots of details and results are discussed"}, {"start": 166.92, "end": 170.92, "text": " in the paper, which is quite well written and was a joy to read."}, {"start": 170.92, "end": 172.16, "text": " Make sure to have a look at it."}, {"start": 172.16, "end": 174.35999999999999, "text": " The link is available in the video description."}, {"start": 174.35999999999999, "end": 180.12, "text": " Also, thanks for all the kind comments I've experienced a recent influx of emails from"}, {"start": 180.12, "end": 185.35999999999999, "text": " people all around the world expressing how they are enjoying the series and many of them"}, {"start": 185.35999999999999, "end": 190.76, "text": " telling their personal stories and their relation to science and how these new inventions"}, {"start": 190.76, "end": 194.35999999999999, "text": " are being talked about over dinner with the family and relatives."}, {"start": 194.35999999999999, "end": 197.32, "text": " It has been such a delight to read these messages."}, {"start": 197.32, "end": 198.32, "text": " Thank you."}, {"start": 198.32, "end": 225.07999999999998, "text": " Thanks for watching and for your generous support."}]
Two Minute Papers
https://www.youtube.com/watch?v=psOPu3TldgY
Learning to Fill Holes in Images | Two Minute Papers #130
The paper "Scene Completion Using Millions of Photographs" is available here: http://graphics.cs.cmu.edu/projects/scene-completion/scene-completion.pdf WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Claudio Fernandes, Daniel John Benton, Dave Rushton-Smith, Sunil Kim. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1984308/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is from 2007, from 10 years ago, and I'm sure you'll be surprised by how well it is holding up to today's standards. For me, it was one of the works that foreshadowed the incredible power of data-driven learning algorithms. So, let's grab an image, cut a sizable part out of it, and try to algorithmically fill it with data that makes sense. Removing a drunk photo-bombing friend from your wedding picture or a building blocking a beautiful view of the sea are excellent and, honestly, painfully real examples of this. This problem we like to call image completion or image inpainting. But mathematically, this may sound like crazy talk: who really knows what information should be there in these holes, let alone a computer. The first question is, why would we have to synthesize all these missing details from scratch? Why not start looking around in an enormous database of photographs and look for something similar? For instance, let's unleash a learning algorithm on one million images. And if we do so, we could find that there may be photographs in the database that are from the same place. But then, what about the illumination? The lighting may be different. Well, this is an enormous database, so then we pick a photo that was taken at a similar time of the day and use that information. And as we can see in the results, the technique works like magic. Awesome! It doesn't require user-made annotations or any sort of manual labor. These results were way, way ahead of the competition. And sometimes, the algorithm proposes a set of solutions that we can choose from. The main challenge of this solution is finding similar images within the database, and fortunately, a trivial technique that we call nearest neighbor search can rapidly eliminate 99.99% of the dissimilar images. The paper also discusses some of the failure cases, which arise mostly from the lack of high-level semantic information. For instance, when we have to finish people, which is clearly not what this technique is meant to do, unless it's a statue of a famous person with many photographs in the database. Good thing we are in 2017, and we know that plenty of research groups are already working on this, and I wouldn't be surprised to see a generative adversarial network-based technique pop up for this in the very near future. Thanks for watching, and for your generous support, and I'll see you next time!
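Here is a tiny Python sketch of the retrieval idea, not the paper's implementation: the real system uses gist descriptors and context matching over millions of photographs, plus careful seam finding and blending when pasting the match into the hole, whereas below the descriptor is just a low-resolution thumbnail and the "database" is random noise. Still, it shows how a query with a hole can be matched against a collection by comparing only the pixels we trust.

import numpy as np

def descriptor(img, size=8):
    """Very crude stand-in for a scene descriptor: an 8x8 grayscale thumbnail."""
    h, w = img.shape[:2]
    ys = (np.arange(size) * h) // size
    xs = (np.arange(size) * w) // size
    gray = img.mean(axis=2) if img.ndim == 3 else img
    return gray[np.ix_(ys, xs)].ravel()

def best_matches(query, hole_mask, database, k=3):
    """Rank database images by descriptor distance, counting only cells outside the hole."""
    q = descriptor(query)
    valid = descriptor((~hole_mask).astype(float)) > 0.5         # which descriptor cells to trust
    dists = []
    for idx, img in enumerate(database):
        d = descriptor(img)
        dists.append((np.sum((q[valid] - d[valid]) ** 2), idx))
    return [idx for _, idx in sorted(dists)[:k]]

# Example with random images standing in for the photo collection.
rng = np.random.default_rng(1)
db = [rng.random((64, 64, 3)) for _ in range(100)]
query = db[42].copy()
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True             # the region we cut out
query[mask] = 0.0
print(best_matches(query, mask, db))  # index 42 should rank at the top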
[{"start": 0.0, "end": 5.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 5.12, "end": 11.28, "text": " This paper is from 2007, from 10 years ago, and I'm sure you'll be surprised by how well"}, {"start": 11.28, "end": 13.92, "text": " it is holding up to today's standards."}, {"start": 13.92, "end": 19.44, "text": " For me, it was one of the works that foreshadowed the incredible power of data-driven learning"}, {"start": 19.44, "end": 20.44, "text": " algorithms."}, {"start": 20.44, "end": 26.400000000000002, "text": " So, let's grab an image and cut a sizable part out of it and try to algorithmically fill"}, {"start": 26.400000000000002, "end": 29.400000000000002, "text": " it with data that makes sense."}, {"start": 29.4, "end": 33.9, "text": " Using a drunk photo-bombing friend from your wedding picture or a building blocking"}, {"start": 33.9, "end": 40.28, "text": " a beautiful view to the sea are excellent and honestly, painfully real examples of this."}, {"start": 40.28, "end": 44.72, "text": " This problem we like to call image completion or image in painting."}, {"start": 44.72, "end": 50.32, "text": " But mathematically, this may sound like crazy talk, who really knows what information should"}, {"start": 50.32, "end": 53.480000000000004, "text": " be there in these holes, let alone a computer."}, {"start": 53.480000000000004, "end": 59.16, "text": " The first question is, why would we have to synthesize all these missing details from scratch?"}, {"start": 59.16, "end": 64.32, "text": " Why not start looking around in an enormous database of photographs and look for something"}, {"start": 64.32, "end": 65.32, "text": " similar?"}, {"start": 65.32, "end": 70.24, "text": " For instance, let's unleash a learning algorithm on one million images."}, {"start": 70.24, "end": 75.36, "text": " And if we do so, we could find that there may be photographs in the database that are"}, {"start": 75.36, "end": 77.36, "text": " from the same place."}, {"start": 77.36, "end": 79.92, "text": " But then, what about the illumination?"}, {"start": 79.92, "end": 81.6, "text": " The lighting may be different."}, {"start": 81.6, "end": 86.84, "text": " Well, this is an enormous database, so then we pick a photo that was taken at a similar"}, {"start": 86.84, "end": 89.92, "text": " time of the day and use that information."}, {"start": 89.92, "end": 94.16, "text": " And as we can see in the results, the technique works like magic."}, {"start": 94.16, "end": 95.16, "text": " Awesome!"}, {"start": 95.16, "end": 99.88000000000001, "text": " It doesn't require user-made annotations or any sort of manual labor."}, {"start": 99.88000000000001, "end": 103.60000000000001, "text": " These results were way, way ahead of the competition."}, {"start": 103.60000000000001, "end": 108.4, "text": " And sometimes, the algorithm proposes a set of solutions that we can choose from."}, {"start": 108.4, "end": 114.68, "text": " The main challenge of this solution is finding similar images within the database, and fortunately,"}, {"start": 114.68, "end": 121.80000000000001, "text": " on a trivial technique that we call nearest neighbor search, can rapidly eliminate 99.99%"}, {"start": 121.80000000000001, "end": 123.72000000000001, "text": " of the dissimilar images."}, {"start": 123.72000000000001, "end": 129.0, "text": " The paper also discusses some of the failure cases, which arise mostly from the lack of"}, {"start": 129.0, "end": 131.36, "text": " high-level semantic 
information."}, {"start": 131.36, "end": 135.68, "text": " For instance, when we have to finish people, which is clearly not what this technique is"}, {"start": 135.68, "end": 141.12, "text": " meant to do, unless it's a statue of a famous person with many photographs taken in the"}, {"start": 141.12, "end": 142.12, "text": " database."}, {"start": 142.12, "end": 148.16, "text": " Good that we are in 2017, and we know that plenty of research groups are already working"}, {"start": 148.16, "end": 152.8, "text": " on this, and I wouldn't be surprised to see a generative adversarial network-based"}, {"start": 152.8, "end": 155.88, "text": " technique to pop up for this in the very near future."}, {"start": 155.88, "end": 175.84, "text": " Thanks for watching, and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=kf-KViOuktc
AI Builds 3D Models From Images With a Twist | Two Minute Papers #129
The paper "IM2CAD" is available here: http://homes.cs.washington.edu/~izadinia/im2cad.html LSUN Challenge datasets: http://lsun.cs.princeton.edu/2016/ More related papers are available here: http://www.cs.toronto.edu/~fidler/projects/rent3D.html http://web.engr.illinois.edu/~slazebni/publications/iccv15_informative.pdf http://ieeexplore.ieee.org/document/6619238/?reload=true WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-389254/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. CAD stands for Computer Aided Design, basically a digital 3D model of a scene. Here is an incredibly difficult problem. What if we give the computer a photograph of a living room and the output would be a digital, fully modeled 3D scene, a CAD model? This is a remarkably difficult problem. Just think about it, the algorithm would have to have an understanding of perspective, illumination, occlusions and geometry. And with that in mind, have a look at these incredible results. Now, this clearly sounds impossible. Not so long ago, we talked about a neural network-based technique that tried to achieve something similar, and it was remarkable. But the output was a low-resolution voxel array, which is kind of like an approximate model built by children from a few large LEGO pieces. The link to this work is available in the video description. But clearly, we can do better. So how could this be possible? Well, the most important observation is that if we take a photograph of a room, there is a high chance that the furniture within is not custom built, but consists mostly of commercially available pieces. So who said that we have to build these models from scratch? Let's look into a database that contains the geometry for publicly available furniture pieces and find which ones are seen in the image. So here's what we do. Given a large amount of training samples, neural networks are adept at recognizing objects in a photograph. That would be step number one. After the identification, the algorithm knows where the object is. Now, we are interested in what it looks like and how it is aligned. And then, we start to look up public furniture databases for objects that are as similar to the ones presented in the photo as possible. Finally, we put everything in its appropriate place and create a new digital image with a light simulation program. This is an iterative algorithm, which means that it starts out with a coarse initial guess that is refined many, many times until some sort of convergence is reached. This means that no matter how hard we try, only minor improvements can be made to this solution. Then, we can stop. And here, the dissimilarity between the photograph and the digitally rendered image was subject to minimization. This entire process of creating the 3D geometry of the scene takes around 5 minutes. And this technique can also estimate the layout of a room from this one photograph. Now, this algorithm is absolutely amazing. But of course, the limitations are also to be candidly discussed. While some failure cases arise from misjudging the alignment of the objects, the technique is generally quite robust. Non-cubic room shapes are also likely to introduce issues, such as the omission or misplacement of an object. Also, kitchens and bathrooms are not yet supported. Note that this is not the only paper solving this problem. I've made sure to link some more related papers in the video description for your enjoyment. If you have found this interesting, make sure to subscribe and stay tuned for more Two Minute Papers episodes. Thanks for watching and for your generous support, and I'll see you next time.
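The render-and-compare loop at the heart of this is easy to sketch. The Python toy below is emphatically not the IM2CAD system: there is no furniture database, no neural network and no light simulation, the "renderer" just paints a soft blob and the unknown is a single object position, but it shows the iterative structure: start from a coarse guess, nudge the scene, keep the change if the rendered image gets closer to the photograph, and stop once only minor improvements remain.

import numpy as np

SIZE = 64
ys, xs = np.mgrid[0:SIZE, 0:SIZE]

def render(pos):
    """Stand-in renderer: paint a soft blob at the object's (row, column) position."""
    return np.exp(-((ys - pos[0]) ** 2 + (xs - pos[1]) ** 2) / (2 * 6.0 ** 2))

def dissimilarity(a, b):
    return np.sum((a - b) ** 2)

target = render((20.0, 44.0))          # the "photograph", object at an unknown spot
pos = [32.0, 32.0]                     # coarse initial guess for the placement

for it in range(60):
    best = dissimilarity(render(pos), target)
    improved = False
    for dy, dx in [(-2, 0), (2, 0), (0, -2), (0, 2)]:    # try nudging the object
        pos[0] += dy; pos[1] += dx
        err = dissimilarity(render(pos), target)
        if err < best:
            best, improved = err, True                    # keep the better placement
        else:
            pos[0] -= dy; pos[1] -= dx                    # undo the nudge
    if not improved:
        break                                             # only minor improvements remain: stop
print("recovered placement:", pos)                        # ends up at (20.0, 44.0)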
[{"start": 0.0, "end": 5.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.12, "end": 11.16, "text": " CAD stands for Computer Aided Design, basically a digital 3D model of a scene."}, {"start": 11.16, "end": 13.88, "text": " Here is an incredibly difficult problem."}, {"start": 13.88, "end": 19.64, "text": " What if we give the computer a photograph of a living room and the output would be a digital,"}, {"start": 19.64, "end": 23.080000000000002, "text": " fully modeled 3D scene, a CAD model?"}, {"start": 23.080000000000002, "end": 25.76, "text": " This is a remarkably difficult problem."}, {"start": 25.76, "end": 31.040000000000003, "text": " Just think about it, the algorithm would have to have an understanding of perspective,"}, {"start": 31.040000000000003, "end": 34.120000000000005, "text": " illumination, occlusions and geometry."}, {"start": 34.120000000000005, "end": 38.120000000000005, "text": " And with that in mind, have a look at these incredible results."}, {"start": 38.120000000000005, "end": 41.32, "text": " Now, this clearly sounds impossible."}, {"start": 41.32, "end": 47.32, "text": " Not so long ago, we talked about a neural network-based technique that tried to achieve something similar,"}, {"start": 47.32, "end": 48.84, "text": " and it was remarkable."}, {"start": 48.84, "end": 54.56, "text": " But the output was a low-resolution voxel array, which is kind of like an approximate model"}, {"start": 54.56, "end": 58.04, "text": " built by children from a few large LEGO pieces."}, {"start": 58.04, "end": 61.120000000000005, "text": " The link to this work is available in the video description."}, {"start": 61.120000000000005, "end": 63.36, "text": " But clearly, we can do better."}, {"start": 63.36, "end": 65.64, "text": " So how could this be possible?"}, {"start": 65.64, "end": 70.48, "text": " Well, the most important observation is that if we take a photograph of a room, there"}, {"start": 70.48, "end": 76.16, "text": " is a high chance that the furniture within are not custom built, but mostly commercially"}, {"start": 76.16, "end": 77.72, "text": " available pieces."}, {"start": 77.72, "end": 81.36, "text": " So who said that we have to build these models from scratch?"}, {"start": 81.36, "end": 86.08, "text": " Let's look into a database that contains the geometry for publicly available furniture"}, {"start": 86.08, "end": 90.2, "text": " pieces and find which ones are seen in the image."}, {"start": 90.2, "end": 91.36, "text": " So here's what we do."}, {"start": 91.36, "end": 97.28, "text": " Given a large amount of training samples, neural networks are adept at recognizing objects"}, {"start": 97.28, "end": 98.68, "text": " on a photograph."}, {"start": 98.68, "end": 100.72, "text": " That would be step number one."}, {"start": 100.72, "end": 104.76, "text": " After the identification, the algorithm knows where the object is."}, {"start": 104.76, "end": 109.36, "text": " Now, we are interested in what it looks like and how it is aligned."}, {"start": 109.36, "end": 114.56, "text": " And then, we start to look up public furniture databases for objects that are as similar"}, {"start": 114.56, "end": 117.96, "text": " to the ones presented in the photo as possible."}, {"start": 117.96, "end": 123.16, "text": " Finally, we put everything in its appropriate place and create a new digital image with"}, {"start": 123.16, "end": 125.03999999999999, "text": " a light simulation program."}, {"start": 125.03999999999999, 
"end": 130.32, "text": " This is an iterative algorithm, which means that it starts out with a course initial"}, {"start": 130.32, "end": 137.07999999999998, "text": " guess that is being refined many, many times until some sort of convergence is reached."}, {"start": 137.08, "end": 142.28, "text": " This means that no matter how hard we try, only minor improvements can be made to this"}, {"start": 142.28, "end": 143.28, "text": " solution."}, {"start": 143.28, "end": 145.08, "text": " Then, we can stop."}, {"start": 145.08, "end": 150.20000000000002, "text": " And here, the dissimilarity between the photograph and the digitally rendered image was subject"}, {"start": 150.20000000000002, "end": 151.64000000000001, "text": " to minimization."}, {"start": 151.64000000000001, "end": 158.0, "text": " This entire process of creating the 3D geometry of the scene takes around 5 minutes."}, {"start": 158.0, "end": 163.52, "text": " And this technique can also estimate the layout of a room from this one photograph."}, {"start": 163.52, "end": 167.32000000000002, "text": " Now, this algorithm is absolutely amazing."}, {"start": 167.32000000000002, "end": 171.04000000000002, "text": " But of course, the limitations are also to be candidly discussed."}, {"start": 171.04000000000002, "end": 176.36, "text": " While some failure cases arise from misjudging the alignment of the objects, the technique"}, {"start": 176.36, "end": 178.56, "text": " is generally quite robust."}, {"start": 178.56, "end": 184.28, "text": " Non-cubic room shapes are also likely to introduce issues, such as the omission or misplacement"}, {"start": 184.28, "end": 185.56, "text": " of an object."}, {"start": 185.56, "end": 188.36, "text": " Also, kitchens and bathrooms are not yet supported."}, {"start": 188.36, "end": 191.76000000000002, "text": " Note that this is not the only paper solving this problem."}, {"start": 191.76, "end": 196.88, "text": " I've made sure to link some more related papers in the video description for your enjoyment."}, {"start": 196.88, "end": 201.51999999999998, "text": " If you have found this interesting, make sure to subscribe and stay tuned for more 2-minute"}, {"start": 201.51999999999998, "end": 202.79999999999998, "text": " paper's episodes."}, {"start": 202.8, "end": 222.60000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LmYKfU5O_NA
Digital Creatures Learn to Cooperate | Two Minute Papers #128
The paper "Discovery of complex behaviors through contact-invariant optimization" is available here: http://homes.cs.washington.edu/~todorov/papers/MordatchSIGGRAPH12.pdf http://homes.cs.washington.edu/~todorov/papers.html Our earlier episode on optimization: https://www.youtube.com/watch?v=1ypV5ZiIbdA Our technical write-up on our video rendering pipeline changes is available here: https://www.patreon.com/posts/improvements-for-7607896 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-768641/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Here's a really cool computer animation problem, you'll love this one. We take a digital character, specify an initial pose and a target objective, for instance, a position somewhere in space. And the algorithm has to come up with a series of smooth movements and contact interactions to obtain this goal. But these movements have to be physically meaningful, meaning that self-intersections and unnatural contortions have to be avoided throughout this process. Keep an eye out for the white cross to see where the target positions are. Now, for starters, it's cool to see that humanoids are handled quite well, but here comes the super fun part. The mathematical formulation of this optimization problem does not depend on the body type at all, therefore both humanoids and almost arbitrarily crazy non-humanoid creatures are also supported. If we make this problem a bit more interesting and make changes to the terrain to torment these creatures, we'll notice that they come up with sensible movements to overcome these challenges. These results are absolutely amazing, and this includes obtaining highly non-trivial target poses such as handstands. The goal does not necessarily have to be a position, but it can be an orientation or a given pose as well. We can even add multiple characters to the environment and ask them to join forces to accomplish a task together. And here you can see that both characters take into consideration the actions of the other one and not only compensate accordingly, but they make sure that this happens in a way that brings them closer to their objective. It is truly incredible to see how these digital creatures can learn such complex animations in a matter of minutes, a true testament to the power of mathematical optimization algorithms. If you wish to hear more about how optimization works, we've had a previous episode on this topic, make sure to check it out. It includes a rigorous mathematical study on how to make the perfect vegetables too. The link is available in the video description. And if you feel a bit addicted to Two Minute Papers, please note that these episodes are available in Early Access through Patreon. Click on the icon with the P at the ending screen if you're interested. It also helps us a great deal in improving the quality of the series. We try to be as transparent as possible, and every now and then we write a technical memo to summarize the recent improvements we were able to make, and this is all thanks to you. If you're interested, I've put a link to the latest post in the video description. Thanks for watching and for your generous support. I'll see you next time!
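To give a flavor of what "movement as mathematical optimization" means, here is a minimal Python sketch. It is not the contact-invariant optimization of the paper: there is no physics, no contacts and no body at all, the unknown is just a short trajectory of 2D positions, and the objective only rewards starting at the initial pose, ending at the target and moving smoothly. The weights and sizes are arbitrary, but the shape of the problem, one cost function over an entire motion, is the same.

import numpy as np
from scipy.optimize import minimize

T = 30                                      # number of animation frames
start = np.array([0.0, 0.0])                # initial pose (here just a 2D position)
target = np.array([4.0, 2.0])               # target objective

def objective(flat):
    traj = flat.reshape(T, 2)
    reach = np.sum((traj[-1] - target) ** 2)        # finish at the target
    anchor = np.sum((traj[0] - start) ** 2)         # begin at the initial pose
    smooth = np.sum(np.diff(traj, axis=0) ** 2)     # keep the motion smooth
    return 50.0 * (reach + anchor) + smooth

res = minimize(objective, np.zeros(T * 2), method="L-BFGS-B")
traj = res.x.reshape(T, 2)
print("first frame:", traj[0].round(3), " last frame:", traj[-1].round(3))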
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karoi Zsolnai-Fehir."}, {"start": 5.0, "end": 9.0, "text": " Here's a really cool computer animation problem you'll love this one."}, {"start": 9.0, "end": 17.0, "text": " We take a digital character, specify an initial pose and a target objective, for instance, a position somewhere in space."}, {"start": 17.0, "end": 24.0, "text": " And the algorithm has to come up with a series of smooth movements and contact interactions to obtain this goal."}, {"start": 24.0, "end": 33.0, "text": " But these movements have to be physically meaningful, such as that self-intersection and non-natural contortions have to be avoided throughout this process."}, {"start": 33.0, "end": 37.0, "text": " Keep an eye out for the wide cross to see where the target positions are."}, {"start": 37.0, "end": 44.0, "text": " Now, for starters, it's cool to see that humanoids are handled quite well, but here comes the super fun part."}, {"start": 44.0, "end": 50.0, "text": " The mathematical formulation of this optimization problem does not depend on the body type at all,"}, {"start": 50.0, "end": 58.0, "text": " therefore both humanoids and almost arbitrarily crazy non-humanoid creatures are also supported."}, {"start": 58.0, "end": 64.0, "text": " If we make this problem a bit more interesting and make changes to the terrain to torment these creatures,"}, {"start": 64.0, "end": 69.0, "text": " we'll notice that they come up with sensible movements to overcome these challenges."}, {"start": 69.0, "end": 77.0, "text": " These results are absolutely amazing, and this includes obtaining highly non-trivial target poses such as handstands."}, {"start": 77.0, "end": 84.0, "text": " The goal does not necessarily have to be a position, but it can be an orientation or a given pose as well."}, {"start": 90.0, "end": 97.0, "text": " We can even add multiple characters to the environment and ask them to join forces to accomplish a task together."}, {"start": 97.0, "end": 110.0, "text": " And here you can see that both characters take into consideration the actions of the other one and not only compensate accordingly, but they make sure that this happens in a way that brings them closer to their objective."}, {"start": 110.0, "end": 122.0, "text": " It is truly incredible to see how these digital creatures can learn such complex animations in a matter of minutes, a true testament to the power of mathematical optimization algorithms."}, {"start": 122.0, "end": 129.0, "text": " If you wish to hear more about how optimization works, we've had a previous episode on this topic, make sure to check it out."}, {"start": 129.0, "end": 134.0, "text": " It includes a rigorous mathematical study on how to make the perfect vegetables too."}, {"start": 134.0, "end": 136.0, "text": " The link is available in the video description."}, {"start": 136.0, "end": 144.0, "text": " And if you feel a bit addicted to two minute papers, please note that these episodes are available in Early Access through Patreon."}, {"start": 144.0, "end": 148.0, "text": " Click on the icon with the P at the ending screen if you're interested."}, {"start": 148.0, "end": 152.0, "text": " It also helps us a great deal in improving the quality of the series."}, {"start": 152.0, "end": 162.0, "text": " We try to be as transparent as possible, and every now and then we write a technical memo to summarize the recent improvements we were able to make, and this is all thanks to you."}, 
{"start": 162.0, "end": 166.0, "text": " If you're interested, I've put a link to the latest post in the video description."}, {"start": 166.0, "end": 168.0, "text": " Thanks for watching and for your generous support."}, {"start": 168.0, "end": 186.0, "text": " I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=XbuEYcFfl6s
How Do Hollywood Movies Render Smoke? | Two Minute Papers #127
The paper "Importance Sampling Techniques for Path Tracing in Participating Media" is available here: https://www.solidangle.com/research/egsr2012_volume.pdf Implementation in 2k (binary + video without code): https://users.cg.tuwien.ac.at/zsolnai/gfx/volumetric-path-tracing-with-equiangular-sampling-in-2k/ Solid Angle (Arnold renderer) webpage + Oscar award headline: https://www.solidangle.com/ https://www.solidangle.com/news/2017-scitech-award/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Rendered scene credits: Bedroom - http://www.blendswap.com/blends/view/17385 Skin - http://www.blendswap.com/blends/view/84082 Shadertoy - https://www.shadertoy.com/view/lsV3zV Thumbnail background image credit: https://pixabay.com/photo-690293/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. How do people create these beautiful computer-generated images and videos that we see in Hollywood blockbuster movies? In the world of light simulation programs, to obtain a photorealistic image, we create a digital copy of a scene, add the camera and the light source, and simulate the paths of millions of light rays between the camera and the light sources. This technique we like to call path tracing, and it may take several minutes to obtain only one image on a powerful computer. However, in these simulations, the rays of light are only allowed to bounce off the surface of objects. In reality, many objects are volumes, where the rays of light can penetrate their surface and scatter around before exiting or being absorbed. Examples include not only rendering amazingly huge smoke plumes and haze, but all kinds of translucent objects, like our skin, marble, wax, and many others. Such an extended simulation program we call not path tracing, but volumetric path tracing, and we can create even more beautiful images with it. However, this comes at a steep price. If classical path tracing took several minutes per image, this addition of complexity often bumps up the execution time to several hours. In order to save time, we have to realize that not all light paths contribute equally to our image. Many of them carry barely any contribution, and only a tiny, tiny fraction of these paths carry the majority of the information that we see in these images. So what if we could create algorithms that know exactly where to look for these high-value light paths and systematically focus on them? This family of techniques we call importance sampling methods. These help us find the regions where light is concentrated, if you will. This piece of work is an excellent new way of doing importance sampling for volumetric path tracing, and it works by identifying and focusing on regions that are most likely to scatter light. And it beats the already existing importance sampling techniques with ease. Now, to demonstrate how simple this technique is, a few years ago, during a discussion with one of the authors, Marcos Fajardo, I told him that I would implement their method in real time on the graphics card in a program smaller than 4 kilobytes. So we made a bet. 4 kilobytes is so little, we can store only a fraction of a second of MP3 music in it. Also, this is an empty file generated with Microsoft Word. Apparently, in some software systems, the definition of nothing takes several times more than 4 kilobytes. And, after a bit of experimentation, I was quite stunned by the results, because the final result was less than 2 kilobytes, even if support for some rudimentary animations is added. The whole computer program that executes volumetric path tracing with this equiangular importance sampling technique fits on your business card twice. Absolute insanity. I've put a link discussing some details in the video description. Now, don't be fooled by the simplicity of the presentation here. The heart and soul of the algorithm that created the rocket launch scene in Men in Black 3 is the same as this one. Due to legal reasons, it is not advisable to show it to you in this video, but this is fortunately one more excellent reason for you to have a look at the paper. As always, the link is available in the video description.
This is my favorite kind of paper where there are remarkably large gains to be had, and it can be easily added to pretty much any light simulation program out there. I often like to say that the value over complexity ratio is tending towards infinity. This work is a prime example of that. By the way, Marcos and his team recently won a technical Oscar award not only for this, but for their decades of hard work on their Arnold renderer, which is behind many, many Hollywood productions. I've put a link to their website in the video description as well. Have a look, congrats guys! Thanks for watching and for your generous support, and I'll see you next time!
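Since equiangular sampling is so compact, here is a minimal Python sketch of the sampling step itself, under the simplifying assumption that the integrand along the ray is just the inverse-squared-distance falloff toward a point light (the real estimator also carries transmittance and a phase function, for which this distribution remains a very good fit). The geometry numbers below are arbitrary.

import numpy as np

def equiangular_sample(u, delta, D, t0, t1):
    """Sample a distance t in [t0, t1] with density proportional to 1/(D^2 + (t - delta)^2).
    delta: distance along the ray to the point closest to the light,
    D: distance from the light to that closest point, u: uniform random numbers in [0, 1)."""
    theta_a = np.arctan((t0 - delta) / D)
    theta_b = np.arctan((t1 - delta) / D)
    theta = theta_a + u * (theta_b - theta_a)           # uniform in angle, hence "equiangular"
    t = delta + D * np.tan(theta)
    pdf = D / ((theta_b - theta_a) * (D ** 2 + (t - delta) ** 2))
    return t, pdf

def falloff(t, delta, D):
    return 1.0 / (D ** 2 + (t - delta) ** 2)            # toy integrand: 1/r^2 toward the light

# Geometry: a ray segment [0, 10] passing at distance 0.2 from a point light whose
# closest point along the ray is at delta = 4.
delta, D, t0, t1 = 4.0, 0.2, 0.0, 10.0
exact = (np.arctan((t1 - delta) / D) - np.arctan((t0 - delta) / D)) / D

rng = np.random.default_rng(0)
u = rng.random(1000)

t_uni = t0 + u * (t1 - t0)                               # uniform sampling of the segment
est_uni = falloff(t_uni, delta, D) * (t1 - t0)

t_eq, pdf_eq = equiangular_sample(u, delta, D, t0, t1)   # equiangular sampling
est_eq = falloff(t_eq, delta, D) / pdf_eq

print(f"exact        {exact:.4f}")
print(f"uniform      {est_uni.mean():.4f}  (std of a single sample {est_uni.std():.2f})")
print(f"equiangular  {est_eq.mean():.4f}  (std of a single sample {est_eq.std():.2f})")

For this bare 1/r^2 integrand the equiangular density matches the integrand exactly, so its estimator has essentially zero variance, while uniform sampling wastes most of its samples far from the light; with transmittance and a phase function included the match is no longer exact, but the gap to uniform sampling stays large.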
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.0, "end": 13.0, "text": " How do people create these beautiful computer-generated images and videos that we see in Hollywood blockbuster movies?"}, {"start": 13.0, "end": 18.0, "text": " In the world of light simulation programs, to obtain a photorealistic image,"}, {"start": 18.0, "end": 23.0, "text": " we create a digital copy of a scene, add the camera and the light source,"}, {"start": 23.0, "end": 28.0, "text": " and simulate the paths of millions of light rays between the camera and the light sources."}, {"start": 28.0, "end": 36.0, "text": " This technique would like to call path tracing, and it may take several minutes to obtain only one image on a powerful computer."}, {"start": 36.0, "end": 43.0, "text": " However, in these simulations, the rays of light are allowed to bounce off of the surface of objects."}, {"start": 43.0, "end": 53.0, "text": " In reality, many objects are volumes, where the rays of light can penetrate their surface and scatter around before exiting or being absorbed."}, {"start": 53.0, "end": 65.0, "text": " Examples include not only rendering amazingly huge smoke plumes and haze, but all kinds of translucent objects, like our skin, marble, wax, and many others."}, {"start": 65.0, "end": 75.0, "text": " Such an extended simulation program we call not path tracing, but volumetric path tracing, and we can create even more beautiful images with it."}, {"start": 75.0, "end": 78.0, "text": " However, this comes at a steep price."}, {"start": 78.0, "end": 88.0, "text": " If the classical path tracing took several minutes per image, this addition of complexity often bumps up the execution time to several hours."}, {"start": 88.0, "end": 94.0, "text": " In order to save time, we have to realize that not all light paths contribute equally to our image."}, {"start": 94.0, "end": 104.0, "text": " Many of them carry barely any contributions, and only a tiny, tiny fraction of these paths carry the majority of the information that we see in these images."}, {"start": 104.0, "end": 114.0, "text": " So what if we could create algorithms that know exactly where to look for these high value light paths and systematically focus on them?"}, {"start": 114.0, "end": 118.0, "text": " This family of techniques we call important sampling methods."}, {"start": 118.0, "end": 122.0, "text": " These help us finding the regions where light is concentrated, if you will."}, {"start": 122.0, "end": 133.0, "text": " This piece of work is an excellent new way of doing important sampling for volumetric path tracing, and it works by identifying and focusing on regions that are most likely to scale."}, {"start": 133.0, "end": 140.0, "text": " And it beats the already existing important sampling techniques with ease."}, {"start": 140.0, "end": 148.0, "text": " Now, to demonstrate how simple this technique is, a few years ago, during a discussion with one of the authors, Marcos Fajardo,"}, {"start": 148.0, "end": 156.0, "text": " I told him that I would implement their method in real time on the graphical card in a smaller than 4 kilobytes program."}, {"start": 156.0, "end": 158.0, "text": " So we made a bet."}, {"start": 158.0, "end": 165.0, "text": " 4 kilobytes is so little, we can store only a fraction of a second of MP3 music in it."}, {"start": 165.0, "end": 169.0, "text": " Also, this is an empty file generated with Windows Word."}, {"start": 169.0, "end": 
176.0, "text": " Apparently, in some software systems, the definition of nothing takes several times more than 4 kilobytes."}, {"start": 176.0, "end": 185.0, "text": " And, after a bit of experimentation, I was quite stunned by the results because the final result was less than 2 kilobytes,"}, {"start": 185.0, "end": 189.0, "text": " even if support for some rudimentary animations is added."}, {"start": 189.0, "end": 199.0, "text": " The whole computer program that executes volumetric path tracing with this acquiangular important sampling technique fits on your business card twice."}, {"start": 199.0, "end": 204.0, "text": " Absolute insanity. I've put a link discussing some details in the video description."}, {"start": 204.0, "end": 208.0, "text": " Now, don't be fooled by the simplicity of the presentation here."}, {"start": 208.0, "end": 216.0, "text": " The heart and soul of the algorithm that created the rocket launch scene in Manning Black 3 is the same as this one."}, {"start": 216.0, "end": 225.0, "text": " Due to legal reasons, it is not advisable to show it to you in this video, but this is fortunately one more excellent reason for you to have a look at the paper."}, {"start": 225.0, "end": 228.0, "text": " As always, the link is available in the video description."}, {"start": 228.0, "end": 234.0, "text": " This is my favorite kind of paper where there are remarkably large gains to be had,"}, {"start": 234.0, "end": 239.0, "text": " and it can be easily added to pretty much any light simulation program out there."}, {"start": 239.0, "end": 245.0, "text": " I often like to say that the value over complexity ratio is tending towards infinity."}, {"start": 245.0, "end": 247.0, "text": " This work is a prime example of that."}, {"start": 247.0, "end": 253.0, "text": " By the way, Marcos and his team recently won a technical Oscar award not only for this,"}, {"start": 253.0, "end": 259.0, "text": " but for their decades of hard work on their Arnold renderer, which is behind many, many Hollywood productions."}, {"start": 259.0, "end": 262.0, "text": " I've put a link to their website in the video description as well."}, {"start": 262.0, "end": 264.0, "text": " Have a look, congrats guys!"}, {"start": 264.0, "end": 293.0, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=-all65C-dh0
Fast Photorealistic Fur and Hair With Cone Tracing | Two Minute Papers #126
Our Twitter feed is available here: https://twitter.com/karoly_zsolnai The paper "Cone Tracing for Furry Object Rendering" is available here: http://gaps-zju.org/mlchai/resources/qin2014cone.pdf http://gaps-zju.org/mlchai/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-640498/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a cone-based ray tracing algorithm for rendering photorealistic images of hair and furry objects, where the number of hair strands is typically over a hundred thousand. Okay, wait, what do these terms mean exactly? Ray tracing means a bona fide light simulation program where we follow the path of many millions of light rays between the light sources and our camera. This usually means that light reflections and refractions, lens blur and defocus are taken into consideration. This feature is also referred to as depth of field, or DOF in short, as you can see in the video. A fully ray traced system like this for hair and fur leads to absolutely beautiful images that you can see throughout this footage. So what about the cone-based part? Earlier, we had an episode about voxel cone tracing, which is an absolutely amazing technique to perform ray tracing in real time. It works by replacing these infinitely thin light rays with thicker, cone-shaped rays, which reduces the execution time of the algorithm significantly at the cost of a mostly minor, sometimes even imperceptible degradation in image quality. Since the hair strands that we are trying to hit with the rays are extremely thin, and cone tracing makes the rays thicker, extending this concept to rendering fur without non-trivial extensions is going to be a fruitless endeavor. The paper contains techniques to overcome this issue: an efficient data structure is proposed to store and find the individual hair strands, along with a way to intersect these cones with the fibers. The algorithm is also able to adapt the cone sizes to the scene we have at hand. The previous techniques typically took at least 20 to 30 minutes to render one image, and with this efficient solution we'll be greeted by a photorealistic image at least 4 to 6 times quicker, in less than 5 minutes, while some examples were completed in less than a minute. I cannot get tired of seeing these tiny photorealistic furry animals in Pixar movies, and I am super happy to see there is likely going to be much, much more of these. By the way, if you are subscribed to the channel, please click the little bell next to the subscription icon to make sure you never miss an episode. Also, you can follow us on Twitter for updates. Thanks for watching and for your generous support. I'll see you next time.
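As a rough illustration of the kind of test the transcript alludes to, here is a sketch of a cone-versus-hair-fiber coverage check: the cone axis and the fiber are both treated as lines, their closest approach is compared against the cone footprint radius at that depth, and a soft coverage weight is returned instead of a hard hit/miss. This is a simplified stand-in, not the data structure or the exact intersection routine from the paper; the function name, the infinite-line assumption and the coverage formula are illustrative choices.

```python
import numpy as np

def cone_fiber_coverage(apex, axis, half_angle, fiber_p, fiber_d, fiber_radius):
    # Closest approach between the cone axis (apex + s * axis) and the fiber
    # axis (fiber_p + t * fiber_d), both treated as infinite lines for brevity.
    d1 = axis / np.linalg.norm(axis)
    d2 = fiber_d / np.linalg.norm(fiber_d)
    w0 = apex - fiber_p
    b = float(d1 @ d2)
    d = float(d1 @ w0)
    e = float(d2 @ w0)
    denom = 1.0 - b * b
    if denom < 1e-8:                      # cone axis and fiber nearly parallel
        s_axis, t_fiber = 0.0, e
    else:
        s_axis = (b * e - d) / denom
        t_fiber = (e - b * d) / denom
    if s_axis <= 0.0:                     # fiber lies behind the cone apex
        return 0.0
    p_axis = apex + s_axis * d1
    p_fiber = fiber_p + t_fiber * d2
    dist = float(np.linalg.norm(p_axis - p_fiber))
    r_cone = s_axis * np.tan(half_angle)  # cone footprint radius at that depth
    # Soft coverage: roughly how much of the fiber's thickness falls inside the footprint.
    return float(np.clip((r_cone + fiber_radius - dist) / (2.0 * fiber_radius), 0.0, 1.0))
```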
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizona Ifeher."}, {"start": 4.36, "end": 15.32, "text": " This is a cone-based ray tracing algorithm for rendering photorealistic images of hair and furry objects where the number of hair strands is typically over a hundred thousand."}, {"start": 15.32, "end": 18.82, "text": " Okay, wait, what do these terms mean exactly?"}, {"start": 18.82, "end": 28.68, "text": " Ray tracing means a bona fide light simulation program where we follow the path of many millions of light rays between the light sources and our camera."}, {"start": 28.68, "end": 36.08, "text": " This usually means that light reflections and refractions, lens blur and defocus are taken into consideration."}, {"start": 36.08, "end": 42.0, "text": " This feature is also referred to as depth of field where DOF in short, as you can see in the video."}, {"start": 42.0, "end": 50.0, "text": " A fully ray traced system like this for hair and fur leads to absolutely beautiful images that you can see throughout this footage."}, {"start": 50.0, "end": 52.239999999999995, "text": " So what about the cone-based part?"}, {"start": 52.24, "end": 61.160000000000004, "text": " Earlier, we had an episode about voxel cone tracing which is an absolutely amazing technique to perform ray tracing in real time."}, {"start": 61.160000000000004, "end": 77.56, "text": " It works by replacing these infinitely thin light rays with thicker cone-shaped rays which reduces the execution time of the algorithm significantly at the cost of mostly a minor sometimes even imperceptible degradation in image quality."}, {"start": 77.56, "end": 90.8, "text": " Since the hair strands that we are trying to hit with the rays are extremely thin and cone tracing makes the rays thicker, extending this concept to rendering fur without non-trivial extensions is going to be a fruitless endeavor."}, {"start": 90.8, "end": 102.76, "text": " The paper contains techniques to overcome this issue and an efficient data structure is proposed to store and find the individual hair strands and the way to intersect these cones with the fibers."}, {"start": 102.76, "end": 107.84, "text": " The algorithm is also able to adapt the cone sizes to the scene we have at hand."}, {"start": 107.84, "end": 125.2, "text": " The previous techniques typically took at least 20 to 30 minutes to render one image and with this efficient solution will be greeted by a photorealistic image at least 4 to 6 times quicker in less than 5 minutes while some examples were completed in less than a minute."}, {"start": 125.2, "end": 135.2, "text": " I cannot get tired of seeing these tiny photorealistic furry animals in Pixar movies and I am super happy to see there is likely going to be much, much more of these."}, {"start": 135.2, "end": 142.72, "text": " By the way, if you are subscribed to the channel, please click the little bell next to the subscription icon to make sure you never miss an episode."}, {"start": 142.72, "end": 145.36, "text": " Also, you can follow us on Twitter for updates."}, {"start": 145.36, "end": 147.76, "text": " Thanks for watching and for your generous support."}, {"start": 147.76, "end": 165.76, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=vaFhLAbPi8w
Game AI Development With OpenAI Universe | Two Minute Papers #125
OpenAI Universe + blog post: https://openai.com/blog/universe/ https://universe.openai.com/ Also, make sure to check out Google DeepMind's lab: https://github.com/deepmind/lab https://deepmind.com/blog/open-sourcing-deepmind-lab/ For the record: no, I am not an Edge user. :) Terrain learning footage credit: http://www.cs.ubc.ca/~van/papers/2016-TOG-deepRL/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-821568/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. OpenAI's Gym was a selection of gaming environments for reinforcement learning algorithms. This is a class of techniques that are able to learn and perform an optimal chain of actions in an environment. This environment could be playing video games, navigating a drone around, or teaching digital creatures to walk. In this system, people could create new reinforcement learning programs and decide whose AI is the best. Gym was a ton of fun, but have a look at this one. How is this different from Gym? This new software platform, Universe, works not only for reinforcement learning algorithms, but for arbitrary programs, like a freestyle wrestling competition for AI researchers. The list of games includes GTA V, Mirror's Edge, Starcraft II, Civilization V, Minecraft, Portal, and a lot more this time around. Super exciting! You can download this framework right now and start testing. One can also perform different browser tasks, such as booking a plane ticket or other endeavors that require navigating around in a web browser interface. Given the current software architecture of Universe, practically any task where automation makes sense can be included in the future. And we don't need to make any intrusive changes to the game itself. In fact, we don't even have to have access to the source code. This is huge, especially given that many of these games are proprietary software. To make this happen, individual deals had taken place between the game development companies and OpenAI. Since the company was founded by Elon Musk and Sam Altman, it has picked up many of the most talented AI researchers around the globe, and I am so happy to see that it really, really shows. There is an excellent blog post describing the details of the system, make sure to have a look. So, I reckon that our comment section is the absolute best I've seen on YouTube. Feel free to participate or start a discussion. There are always plenty of amazing ideas in the comment section. Thanks for watching and for your generous support. I'll see you next time.
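For reference, the quick-start usage of Universe looked roughly like the snippet below, based on the project's public examples at the time; the environment id, the Docker-backed remotes and the gym/universe APIs themselves have since been deprecated, so treat this as a historical sketch rather than working instructions.

```python
import gym
import universe  # importing universe registers its environments with gym

# A Flash racing game exposed as a gym-style environment, run in a local Docker container.
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)
observation_n = env.reset()

while True:
    # A fixed, trivial policy for illustration: hold the up-arrow key in every environment.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```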
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ijona Ifehir."}, {"start": 4.5600000000000005, "end": 10.56, "text": " OpenAI's gym was a selection of gaming environments for reinforcement learning algorithms."}, {"start": 10.56, "end": 15.72, "text": " This is a class of techniques that are able to learn and perform an optimal chain of actions"}, {"start": 15.72, "end": 17.240000000000002, "text": " in an environment."}, {"start": 17.240000000000002, "end": 22.2, "text": " This environment could be playing video games, navigating a drone around, or teaching"}, {"start": 22.2, "end": 24.28, "text": " digital creatures to walk."}, {"start": 24.28, "end": 29.96, "text": " In this system, people could create new reinforcement learning programs and decide who's AI"}, {"start": 29.96, "end": 31.32, "text": " is the best."}, {"start": 31.32, "end": 34.480000000000004, "text": " Gym was a ton of fun, but have a look at this one."}, {"start": 34.480000000000004, "end": 36.32, "text": " How is this different from gym?"}, {"start": 36.32, "end": 42.08, "text": " This new software platform, Universe, works not only for reinforcement learning algorithms,"}, {"start": 42.08, "end": 47.96, "text": " but for arbitrary programs, like a freestyle wrestling competition for AI researchers."}, {"start": 47.96, "end": 55.400000000000006, "text": " The list of games include GTA V, Mirror's Edge, Starcraft II, Civilization V, Minecraft,"}, {"start": 55.400000000000006, "end": 58.16, "text": " Portal, and a lot more this time around."}, {"start": 58.16, "end": 59.24, "text": " Super exciting!"}, {"start": 59.24, "end": 63.400000000000006, "text": " You can download this framework right now and proceed testing."}, {"start": 63.400000000000006, "end": 68.84, "text": " One can also perform different browser tasks, such as booking a plane ticket or other endeavors"}, {"start": 68.84, "end": 72.68, "text": " that require navigating around in a web browser interface."}, {"start": 72.68, "end": 77.52000000000001, "text": " Given the current software architecture for Universe, practically any task where automation"}, {"start": 77.52000000000001, "end": 80.4, "text": " makes sense can be included in the future."}, {"start": 80.4, "end": 84.24000000000001, "text": " And we don't need to make any intrusive changes to the game itself."}, {"start": 84.24000000000001, "end": 88.08, "text": " In fact, we don't even have to have access to the source code."}, {"start": 88.08, "end": 92.92, "text": " This is huge, especially given that many of these games are proprietary software."}, {"start": 92.92, "end": 98.16, "text": " So to make this happen, individual deals had taken place between the game development companies"}, {"start": 98.16, "end": 99.48, "text": " and open AI."}, {"start": 99.48, "end": 103.8, "text": " When the company was founded by Elon Musk and Sam Altman, they have picked up so many"}, {"start": 103.8, "end": 108.56, "text": " of the most talented AI researchers around the globe and I am so happy to see that it"}, {"start": 108.56, "end": 110.56, "text": " really, really shows."}, {"start": 110.56, "end": 114.8, "text": " There is an excellent blog post describing the details of the system, make sure to have"}, {"start": 114.8, "end": 115.8, "text": " a look."}, {"start": 115.8, "end": 121.2, "text": " So, I reckon that our comment section is the absolute best I've seen on YouTube."}, {"start": 121.2, "end": 123.75999999999999, "text": " Feel free to 
participate or start a discussion."}, {"start": 123.75999999999999, "end": 127.47999999999999, "text": " There are always plenty of amazing ideas in the comment section."}, {"start": 127.47999999999999, "end": 129.84, "text": " Thanks for watching and for your generous support."}, {"start": 129.84, "end": 146.6, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=WovbLx8C0yA
Enhance! Super Resolution From Google | Two Minute Papers #124
The paper "RAISR: Rapid and Accurate Image Super Resolution" is available here: https://arxiv.org/abs/1606.01299 Additional supplementary materials: https://drive.google.com/file/d/0BzCe024Ewz8ab2RKUFVFZGJ4OWc/view Blog posts: https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html https://www.blog.google/products/google-plus/saving-you-bandwidth-through-machine-learning/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Image credits: Super resolution - https://en.wikipedia.org/wiki/Super-resolution_imaging https://commons.wikimedia.org/wiki/File:An_example_of_super_resolution_with_still_RAW_photo..jpg Image inpainting - http://www.cs.toronto.edu/~mangas/teaching/320/assignments/a2/ http://cimg.eu/greycstoration/demonstration.shtml Thumbnail background image credit: https://pixabay.com/photo-1844081/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What is super-resolution? Super-resolution is a process where our input is a coarse, low-resolution image, and the output is the same image, but now with more details and in high resolution. We'll also refer to this process as image upscaling. And in this piece of work, we are interested in performing single-image super-resolution, which means that no additional data is presented to the algorithm that could help the process. Despite the incredible results seen in practically any of the crime-solving television shows out there, our intuition would perhaps say that this problem at first sight sounds impossible. How could one mathematically fill in the details when these details are completely unknown? Well, that's only kind of true. Let's not confuse super-resolution with image inpainting, where we essentially cut an entire part out of an image and try to replace it, leaning on our knowledge of the surroundings of the missing part. That's a different problem. Here, the entirety of the image is known, and the details require some enhancing. And this particular method is not based on neural networks, but is still a learning-based technique. The cool thing here is that we can use a training data set that is, for all intents and purposes, arbitrarily large. We can just grab a high-resolution image, convert it to a lower resolution, and we immediately have our hands on a training example for the learning algorithm. These would be the before and after images, if you will. And here, during learning, the image is subdivided into small image patches, and buckets are created to aggregate the information between patches that share similar features. These features include brightness, textures, and the orientation of the edges. The technique looks at how the small and large resolution images relate to each other when viewed through the lens of these features. Two remarkably interesting things arose from this experiment. One, it outperforms existing neural network-based techniques, and two, it only uses 10,000 images and one hour of training time, which, in the world of deep neural networks, is so little it's completely unheard of. Insanity, really, really well done. Some tricks are involved to keep the memory consumption low, the paper discusses how it is done, and there are also plenty of other details within, make sure to have a look, as always, it is linked in the video description. It can either be run directly on the low-resolution image, or alternatively, we can first run a cheap and naive, decade-old upscaling algorithm, and run this technique on the upscaled output to improve it. Note that super-resolution is a remarkably competitive field of research, there are hundreds and hundreds of papers appearing on this every year, and almost every single one of them seems to be miles ahead of the previous ones. Whereas, in reality, the truth is that most of these methods have different weaknesses and strengths, and so far, I haven't seen any technique that would be viable for universal use. To make sure that a large number of cases is covered, the authors posted a sizable supplementary document with comparisons. This gives so much more credence to the results. I am hoping to see a more widespread adoption of this in future papers in this area.
For now, when viewing websites, I feel that we are close to the point where we could choose to transmit only the lower resolution images through the network, and perform super-resolution on them locally on our phones and computers. This will lead to significant savings on network bandwidth. We are living amazing times indeed. If you are enjoying the series, make sure to subscribe to the channel, or you can also pick up really cool perks on our Patreon page through this icon with the letter P. Thanks for watching and for your generous support, and I'll see you next time!
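To make the bucketed-filter idea concrete, here is a heavily simplified, RAISR-flavored training sketch: patches are hashed only by their dominant gradient angle (the actual method also uses gradient strength and coherence, among other refinements), and one linear filter per bucket is fit by least squares. It assumes the images in low_imgs have already been cheaply upscaled to the same resolution as their counterparts in high_imgs, and the pixel-by-pixel Python loop is deliberately unoptimized.

```python
import numpy as np

def train_raisr_like_filters(low_imgs, high_imgs, patch=7, angle_bins=24):
    half = patch // 2
    A = [[] for _ in range(angle_bins)]   # stacked patches per bucket
    b = [[] for _ in range(angle_bins)]   # target high-resolution center pixels per bucket
    for lo, hi in zip(low_imgs, high_imgs):
        gy, gx = np.gradient(lo)
        for y in range(half, lo.shape[0] - half):
            for x in range(half, lo.shape[1] - half):
                p = lo[y - half:y + half + 1, x - half:x + half + 1].ravel()
                angle = np.arctan2(gy[y, x], gx[y, x]) % np.pi          # dominant edge direction
                k = min(int(angle / np.pi * angle_bins), angle_bins - 1)
                A[k].append(p)
                b[k].append(hi[y, x])
    filters = []
    for k in range(angle_bins):
        if A[k]:
            # One least-squares filter per bucket, mapping a cheap-upscaled patch
            # to the true high-resolution center pixel.
            f, *_ = np.linalg.lstsq(np.asarray(A[k]), np.asarray(b[k]), rcond=None)
        else:
            f = np.zeros(patch * patch)
            f[(patch * patch) // 2] = 1.0   # identity filter for empty buckets
        filters.append(f)
    return filters
```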
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 4.76, "end": 6.76, "text": " What is super-resolution?"}, {"start": 6.76, "end": 13.4, "text": " Super-resolution is a process where our input is a course, low-resolution image, and the output"}, {"start": 13.4, "end": 17.88, "text": " is the same image, but now with more details and in high-resolution."}, {"start": 17.88, "end": 21.48, "text": " We'll also refer to this process as image upscaling."}, {"start": 21.48, "end": 26.8, "text": " And in this piece of work, we are interested in performing single-image super-resolution,"}, {"start": 26.8, "end": 32.32, "text": " which means that no additional data is presented to the algorithm that could help the process."}, {"start": 32.32, "end": 37.56, "text": " Despite the incredible results seen, in practically any of the crime-solving television shows"}, {"start": 37.56, "end": 43.08, "text": " out there, our intuition would perhaps say that this problem for the first sight sounds"}, {"start": 43.08, "end": 44.08, "text": " impossible."}, {"start": 44.08, "end": 49.84, "text": " How could one mathematically fill in the details when these details are completely unknown?"}, {"start": 49.84, "end": 52.519999999999996, "text": " Well, that's only kind of true."}, {"start": 52.52, "end": 57.88, "text": " That's not confused super-resolution with image-in-painting, where we essentially cut an entire"}, {"start": 57.88, "end": 63.160000000000004, "text": " part out of an image and try to replace it leaning on our knowledge of the surroundings"}, {"start": 63.160000000000004, "end": 64.76, "text": " of the missing part."}, {"start": 64.76, "end": 66.24000000000001, "text": " That's a different problem."}, {"start": 66.24000000000001, "end": 71.84, "text": " Here, the entirety of the image is known, and the details require some enhancing."}, {"start": 71.84, "end": 76.64, "text": " And this particular method is not based on neural networks, but is still a learning-based"}, {"start": 76.64, "end": 77.64, "text": " technique."}, {"start": 77.64, "end": 83.76, "text": " The cool thing here is that we can use a training data set that is, for all intents and purposes,"}, {"start": 83.76, "end": 85.16, "text": " arbitrarily large."}, {"start": 85.16, "end": 90.56, "text": " We can just grab a high-resolution image, convert it to a lower resolution, and we immediately"}, {"start": 90.56, "end": 94.28, "text": " have our hands on a training example for the learning algorithm."}, {"start": 94.28, "end": 97.72, "text": " These would be the before and after images, if you will."}, {"start": 97.72, "end": 104.16, "text": " And here, during learning, the image is subdivided into small image patches, and buckets are"}, {"start": 104.16, "end": 109.84, "text": " created to aggregate the information between patches that share similar features."}, {"start": 109.84, "end": 114.75999999999999, "text": " These features include brightness, textures, and the orientation of the edges."}, {"start": 114.75999999999999, "end": 119.6, "text": " The technique looks at how the small and large resolution images relate to each other"}, {"start": 119.6, "end": 122.52, "text": " when viewed through the lens of these features."}, {"start": 122.52, "end": 126.84, "text": " Two, remarkably interesting things arose from this experiment."}, {"start": 126.84, "end": 133.92, "text": " One, it outperforms existing neural network-based techniques, and two, it only uses 
10,000"}, {"start": 133.92, "end": 139.48, "text": " images and one hour of training time, which is, in the world of deep neural networks,"}, {"start": 139.48, "end": 143.0, "text": " is so little it's completely unheard of."}, {"start": 143.0, "end": 145.76, "text": " Insanity, really, really well done."}, {"start": 145.76, "end": 150.16, "text": " Some tricks are involved to keep the memory consumption low, the paper discusses how it"}, {"start": 150.16, "end": 154.76, "text": " is done, and there are also plenty of other details within, make sure to have a look,"}, {"start": 154.76, "end": 157.27999999999997, "text": " as always, it is linked in the video description."}, {"start": 157.27999999999997, "end": 162.88, "text": " It can either be run directly on the low-resolution image, or alternatively, we can first run a"}, {"start": 162.88, "end": 168.79999999999998, "text": " cheap and naive, decade-old upscaling algorithm, and run this technique on the upscaled output"}, {"start": 168.79999999999998, "end": 170.2, "text": " to improve it."}, {"start": 170.2, "end": 175.12, "text": " Note that super-resolution is a remarkably competitive field of research, there are hundreds"}, {"start": 175.12, "end": 180.24, "text": " and hundreds of papers appearing on this every year, and almost every single one of them"}, {"start": 180.24, "end": 183.4, "text": " seems to be miles ahead of the previous ones."}, {"start": 183.4, "end": 188.16, "text": " Where, in reality, the truth is that most of these methods have different weaknesses"}, {"start": 188.16, "end": 193.32, "text": " and strengths, and so far, I haven't seen any technique that would be viable for universal"}, {"start": 193.32, "end": 194.32, "text": " use."}, {"start": 194.32, "end": 199.68, "text": " To make sure that a large number of cases is covered, the authors posted a sizable supplementary"}, {"start": 199.68, "end": 201.51999999999998, "text": " document with comparisons."}, {"start": 201.51999999999998, "end": 204.24, "text": " This gives so much more credence to the results."}, {"start": 204.24, "end": 209.64, "text": " I am hoping to see a more widespread adoption of this in future papers in this area."}, {"start": 209.64, "end": 214.76, "text": " For now, when viewing websites, I feel that we are close to the point where we could choose"}, {"start": 214.76, "end": 220.39999999999998, "text": " to transmit only the lower resolution images through the network, and perform super-resolution"}, {"start": 220.39999999999998, "end": 223.56, "text": " on them locally on our phones and computers."}, {"start": 223.56, "end": 226.92, "text": " This will lead to significant savings on network bandwidth."}, {"start": 226.92, "end": 229.35999999999999, "text": " We are living amazing times indeed."}, {"start": 229.35999999999999, "end": 233.35999999999999, "text": " If you are enjoying the series, make sure to subscribe to the channel, or you can also"}, {"start": 233.35999999999999, "end": 238.39999999999998, "text": " pick up really cool perks on our Patreon page through this icon with the letter P."}, {"start": 238.4, "end": 248.4, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=Yd4blFeRTEw
Large-Scale Fluid Simulations On Your Graphics Card | Two Minute Papers #123
The paper "A scalable Schur-complement fluids solver for heterogeneous compute platforms" is available here: http://graphics.cs.wisc.edu/Papers/2016/LMAS16/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit: https://pixabay.com/photo-1913559/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The better fluid simulation techniques out there typically run on our graphics cards, and if we formulate the problem in a way that the many compute units within can work on it in parallel, rowing in unison if you will, we'll be greeted by an incredible bump in the speed of the simulation. This leads to amazingly detailed simulations, many of which you'll see in this footage. It's going to be really good. However, sometimes we have a simulation domain that is so large, it simply cannot be loaded into the memory of our graphics card. What about those problems? Well, the solution could be subdividing the problem into independent subdomains and solving them separately on multiple devices. Slice the problem up into smaller, more manageable pieces. Divide and conquer. But wait, we would just be pretending that these subdomains are independent, because in reality they are clearly not: there is a large amount of fluid flowing between them, and it takes quite a bit of algebraic wizardry to make sure that the information exchange between these devices happens correctly and on time. But if we do it correctly, we can see our reward on the screen. Let's marvel at it together. Oh, yeah! I cannot get tired of this. Typically, the simulations in the individual subdomains are computed on one or more separate graphics cards, and the administration of the intersecting interface takes place on the processor. The challenge of such a solution is that one has to be able to show that the solution of this problem formulation is equivalent to solving the huge original problem, and it also has to be significantly more efficient to be useful for projects in the industry. The paper is one of the finest pieces of craftsmanship I've seen lately. The link is available in the video description. Thanks for watching and for your generous support. I'll see you next time.
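A toy example of the divide-and-conquer-with-an-interface idea, in the spirit of a Schur-complement solve: two interior blocks can be solved independently (say, one per device), and only a small interface system couples them. This is a dense, single-machine sketch for intuition, not the paper's heterogeneous solver; it assumes the matrix has no direct coupling between the two interior blocks, as in a 1D Poisson system split at an interior point.

```python
import numpy as np

def schur_two_subdomain_solve(A, f, interface):
    n = A.shape[0]
    i1 = np.arange(0, interface)            # first subdomain's unknowns
    i2 = np.arange(interface + 1, n)        # second subdomain's unknowns
    ig = np.array([interface])              # shared interface unknown(s)
    A11, A22 = A[np.ix_(i1, i1)], A[np.ix_(i2, i2)]
    A1g, Ag1 = A[np.ix_(i1, ig)], A[np.ix_(ig, i1)]
    A2g, Ag2 = A[np.ix_(i2, ig)], A[np.ix_(ig, i2)]
    Agg = A[np.ix_(ig, ig)]
    # Independent interior solves: these are the per-device workloads.
    x1f = np.linalg.solve(A11, f[i1])
    x2f = np.linalg.solve(A22, f[i2])
    Y1 = np.linalg.solve(A11, A1g)
    Y2 = np.linalg.solve(A22, A2g)
    # Small Schur-complement system on the interface glues the subdomains together.
    S = Agg - Ag1 @ Y1 - Ag2 @ Y2
    xg = np.linalg.solve(S, f[ig] - Ag1 @ x1f - Ag2 @ x2f)
    # Back-substitute the interface values into each subdomain independently.
    x = np.empty(n)
    x[i1] = x1f - Y1 @ xg
    x[i2] = x2f - Y2 @ xg
    x[ig] = xg
    return x

# Tiny usage example on a 1D Poisson system split in the middle.
n = 9
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
x = schur_two_subdomain_solve(A, f, interface=n // 2)
assert np.allclose(A @ x, f)
```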
[{"start": 0.0, "end": 5.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karajona Ifeher."}, {"start": 5.6000000000000005, "end": 10.6, "text": " The better fluid simulation techniques out there typically run on our graphical cards,"}, {"start": 10.6, "end": 16.2, "text": " which if we formulate the problem in a way that the many compute units within can do in parallel"}, {"start": 16.2, "end": 21.8, "text": " if they row in unison if you will, will be greeted by an incredible bump in the speed"}, {"start": 21.8, "end": 23.0, "text": " of the simulation."}, {"start": 23.0, "end": 27.8, "text": " This leads to amazingly detailed simulations, many of which you'll see in this footage."}, {"start": 27.8, "end": 29.8, "text": " It's going to be really good."}, {"start": 29.8, "end": 34.0, "text": " However, sometimes we have a simulation domain that is so large,"}, {"start": 34.0, "end": 38.0, "text": " it simply cannot be loaded into the memory of our graphical card."}, {"start": 38.0, "end": 39.6, "text": " What about those problems?"}, {"start": 39.6, "end": 44.6, "text": " Well, the solution could be subdividing the problem into independent subdomains"}, {"start": 44.6, "end": 47.8, "text": " and solving them separately on multiple devices."}, {"start": 47.8, "end": 51.6, "text": " Slice the problem up into smaller, more manageable pieces."}, {"start": 51.6, "end": 53.400000000000006, "text": " Divide and conquer."}, {"start": 53.400000000000006, "end": 58.0, "text": " But wait, we would just be pretending that these subdomains are independent"}, {"start": 58.0, "end": 63.4, "text": " because in reality they are clearly not, because there is a large amount of fluid flowing between them"}, {"start": 63.4, "end": 68.4, "text": " and it takes quite a bit of algebraic wizardry to make sure that the information exchange"}, {"start": 68.4, "end": 72.0, "text": " between these devices happens correctly and on time."}, {"start": 72.0, "end": 75.6, "text": " But if we do it correctly, we can see our reward on the screen."}, {"start": 75.6, "end": 77.6, "text": " Let's marvel at it together."}, {"start": 85.4, "end": 87.8, "text": " Oh, yeah!"}, {"start": 87.8, "end": 90.0, "text": " I cannot get tired of this."}, {"start": 90.0, "end": 94.6, "text": " Typically, the simulation in the individual subdomains are computed on one"}, {"start": 94.6, "end": 99.39999999999999, "text": " or more separate graphical cards and the administration of the intersecting interface"}, {"start": 99.39999999999999, "end": 101.39999999999999, "text": " takes place on the processor."}, {"start": 101.39999999999999, "end": 105.4, "text": " The challenge of such a solution is that one has to be able to show"}, {"start": 105.4, "end": 111.4, "text": " that the solution of this problem formulation is equivalent to solving the huge original problem"}, {"start": 111.4, "end": 117.0, "text": " and it also has to be significantly more efficient to be useful for projects in the industry."}, {"start": 117.0, "end": 121.4, "text": " The paper is one of the finest pieces of craftsmanship I've seen lately."}, {"start": 121.4, "end": 123.8, "text": " The link is available in the video description."}, {"start": 123.8, "end": 126.2, "text": " Thanks for watching and for your generous support."}, {"start": 126.2, "end": 147.4, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HO1LYJb818Q
AI Makes 3D Models From Photos | Two Minute Papers #122
The paper "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling" and its source code is available here: http://3dgan.csail.mit.edu/ https://arxiv.org/pdf/1610.07584v2.pdf More about generative adversarial networks (and some explanations): Image Editing with Generative Adversarial Networks - https://www.youtube.com/watch?v=pqkpIfu36Os Image Synthesis From Text With Deep Learning - https://www.youtube.com/watch?v=rAbhypxs1qQ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail image background credit: https://pixabay.com/photo-881120/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What if we try to build a generative adversarial network for 3D data? This means that this network would work not on the usual two-dimensional images, but instead on three-dimensional shapes. So, the generator network generates a bunch of different three-dimensional shapes, and the basic question for the discriminator network would be: are these 3D shapes real or synthetic? The main use case of this technique can be, and now, watch closely, taking a photograph of a piece of furniture and automatically getting a digital 3D model of it. Now, it is clear for both of us that this is still a coarse, low-resolution model, but it is incredible to see how a machine can get a rudimentary understanding of 3D geometry in the presence of occlusions, lighting, and different camera angles. That's a stunning milestone indeed. It also supports interpolation between two shapes, which means that we consider the presumably empty space between the shapes as a continuum, and imagine new shapes that are closer to either one or the other. We can do this kind of interpolation, for instance, between two chair models. But the exciting thing is that no one said it has to be two objects of the same class, so we can go even crazier and interpolate between a car and a boat. Since the technique works on a low-dimensional representation of these shapes, we can also perform these crazy algebraic operations between them that follow some sort of intuition. We can add two chairs together, or subtract different kinds of tables from each other. Absolute madness. And one of the most remarkable things about the paper is that the learning took place on a very limited amount of data, not more than 25 training examples per class. One class, we can imagine, is one object type, such as chairs, tables, or cars. The authors made the source code and the pre-trained network available on their website. The link is in the video description. Make sure to have a look. I am so happy to see breakthroughs like this in machine learning research, one after another in quick succession. This work is surely going to spark a lot of follow-up papers, and we'll soon find ourselves getting extremely high quality 3D models from photographs. Also, imagine combining this with a 3D printer. You take a photograph of something, run this algorithm on it, and then print a copy of that piece of furniture or appliance for yourself. We are living amazing times indeed. Thanks for watching and for your generous support, and I'll see you next time.
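The latent-space interpolation and shape arithmetic mentioned above boil down to simple vector operations on the latent codes. The sketch below assumes a generator callable standing in for the trained 3D-GAN generator (for instance, the pre-trained network the authors released), mapping a latent vector to a voxel occupancy grid; the function names and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def interpolate_shapes(generator, z_a, z_b, steps=8):
    # Walk the latent space between two codes and decode a voxel grid at each step.
    shapes = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b          # linear interpolation in latent space
        shapes.append(generator(z) > 0.5)      # threshold occupancies into a binary voxel grid
    return shapes

# "Shape arithmetic" works the same way: combine latent codes, then decode, e.g.
#   new_shape = generator(z_chair_with_arms - z_chair + z_another_chair)
```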
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zolnai-Fehir."}, {"start": 5.0, "end": 10.200000000000001, "text": " What if we try to build a generative adversarial network for 3D data?"}, {"start": 10.200000000000001, "end": 15.72, "text": " This means that this network would work not on the usual two-dimensional images, but instead"}, {"start": 15.72, "end": 17.68, "text": " on three-dimensional shapes."}, {"start": 17.68, "end": 23.28, "text": " So, the generator network generates a bunch of different three-dimensional shapes, and"}, {"start": 23.28, "end": 28.78, "text": " the basic question for the discriminator network would be, are these 3D shapes real or"}, {"start": 28.78, "end": 30.26, "text": " synthetic?"}, {"start": 30.26, "end": 35.56, "text": " The main use case of this technique can be, and now, watch closely, taking a photograph"}, {"start": 35.56, "end": 41.1, "text": " from a piece of furniture and automatically getting a digital 3D model of it."}, {"start": 41.1, "end": 46.28, "text": " Now, it is clear for both of us that this is still a course, low-resolution model, but"}, {"start": 46.28, "end": 52.34, "text": " it is incredible to see how a machine can get a rudimentary understanding of 3D geometry"}, {"start": 52.34, "end": 57.34, "text": " in the presence of occlusions, lighting, and different camera angles."}, {"start": 57.34, "end": 59.38, "text": " That's a stunning milestone indeed."}, {"start": 59.38, "end": 65.12, "text": " It also supports interpolation between two shapes, which means that we consider the presumably"}, {"start": 65.12, "end": 70.48, "text": " empty space between the shapes as a continuum, and imagine new shapes that are closer to"}, {"start": 70.48, "end": 72.46000000000001, "text": " either one or the other."}, {"start": 72.46000000000001, "end": 77.54, "text": " We can do this kind of interpolation, for instance, between two chair models."}, {"start": 77.54, "end": 83.26, "text": " But the exciting thing is that no one said it has to be two objects of the same class,"}, {"start": 83.26, "end": 88.54, "text": " so we can go even crazier and interpolate between a car and a boat."}, {"start": 88.54, "end": 93.14, "text": " Since the technique works on a low-dimensional representation of these shapes, we can also"}, {"start": 93.14, "end": 99.34, "text": " perform these crazy algebraic operations between them that follow some sort of intuition."}, {"start": 99.34, "end": 105.34, "text": " We can add two chairs together, or subtract different kinds of tables from each other."}, {"start": 105.34, "end": 106.54, "text": " Absolute madness."}, {"start": 106.54, "end": 110.82000000000001, "text": " And one of the most remarkable things about the paper is that the learning took place"}, {"start": 110.82, "end": 116.77999999999999, "text": " on a very limited amount of data, not more than 25 training examples per class."}, {"start": 116.77999999999999, "end": 122.61999999999999, "text": " One class we can imagine is one object type, such as chairs, tables, or cars."}, {"start": 122.61999999999999, "end": 127.41999999999999, "text": " The authors made the source code and the pre-trained network available on their website."}, {"start": 127.41999999999999, "end": 129.1, "text": " The link is in the video description."}, {"start": 129.1, "end": 130.29999999999998, "text": " Make sure to have a look."}, {"start": 130.29999999999998, "end": 136.1, "text": " I am so happy to see breakthroughs like this in 
machine learning research, one after another"}, {"start": 136.1, "end": 137.7, "text": " in quick succession."}, {"start": 137.7, "end": 143.17999999999998, "text": " This work is surely going to spark a lot of follow-up papers and will soon find ourselves"}, {"start": 143.17999999999998, "end": 147.33999999999997, "text": " getting extremely high quality 3D models from photographs."}, {"start": 147.33999999999997, "end": 150.7, "text": " Also, imagine combining this with a 3D printer."}, {"start": 150.7, "end": 155.1, "text": " You take a photograph of something, run this algorithm on it, and then print a copy"}, {"start": 155.1, "end": 158.17999999999998, "text": " of that furniture and appliance for yourself."}, {"start": 158.17999999999998, "end": 160.66, "text": " We are living amazing times indeed."}, {"start": 160.66, "end": 170.66, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MtWtY4DdiWs
Text Style Transfer | Two Minute Papers #121
The paper "Awesome Typography: Statistics-Based Text Effects Transfer" is available here: https://arxiv.org/abs/1611.09026 Recommended for you: Artistic Style Transfer For Videos - https://www.youtube.com/watch?v=Uxax5EKg0zA WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail image credit: https://pixabay.com/photo-851328/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Before we start, it is important to emphasize that this paper is not using neural networks. Not so long ago, in 2015, the news took the world by storm. Researchers were able to create a novel neural network-based technique for artistic style transfer, which had quickly become a small subfield of its own within machine learning. The problem definition was the following. We provide an input image and a source photograph, and the goal is to extract the artistic style of this photo and apply it to our image. The results were absolutely stunning, but at the same time, it was difficult to control the outcome. Later, this technique was generalized for higher resolution images, and instead of waiting for hours, it now works in almost real time and is used in several commercial products. Wow, it is rare to see a new piece of technology introduced to the market so quickly. Really cool. However, this piece of work showcases a handcrafted algorithm that only works on a specialized class of inputs, text-based effects, but in this domain, it smokes the competition. And here, the style transfer happens not with any kind of neural network or other popular learning algorithm, but in terms of statistics. In this formulation, we know about the source text as well, and because of that, we know exactly the kind of effects that are applied to it, kind of like a before and after image for some beauty product, if you will. This opens up the possibility of analyzing its statistical properties and applying a similar effect to practically any kind of text input. The term statistical means that we are not interested in one isolated case, but we describe general rules, namely, at a given distance from the text, what is likely to happen there. The resulting technique is remarkably robust, works on a variety of input-output pairs, and is head and shoulders above the competition, including the state-of-the-art neural network-based techniques. That is indeed quite remarkable. I expect graphic designers to be all over this technique in the very near future. This is an excellent, really well-written paper, and the evaluation is also of high quality. If you wish to see how one can do this kind of magic by hand, without resorting to neural networks, don't miss out on this one and make sure to have a look. There's also a possibility of having a small degree of artistic control over the outputs, and who knows, some variant of this could open up the possibility of a fully animated style transfer from one image. Wow! And before we go, we'd like to send a huge shout-out to our fellow scholars who contributed translations to our episodes for a variety of languages. Please note that the names of the contributors are always available in the video description. It is really great to see how the series is becoming more and more available for people around the globe. If you wish to contribute, click on the cogwheel icon in the lower right and then the Subtitles/CC text. Thank you so much. Thanks for watching and for your generous support, and I'll see you next time!
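A crude way to see the "statistics as a function of distance from the text" idea in code: compute a signed distance to the glyph for both the source and the target text, then fill each target pixel by sampling source pixels that sit at a similar distance. The real method works on patches and also optimizes for spatial coherence; everything below is an illustrative simplification assuming boolean glyph masks and an H x W x 3 styled source image.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def transfer_text_effect(src_mask, src_styled, dst_mask, bins=64, rng=None):
    rng = rng or np.random.default_rng(0)
    src_mask = np.asarray(src_mask, dtype=bool)
    dst_mask = np.asarray(dst_mask, dtype=bool)
    # Signed distance to the glyph: positive outside the text, negative inside.
    d_src = distance_transform_edt(~src_mask) - distance_transform_edt(src_mask)
    d_dst = distance_transform_edt(~dst_mask) - distance_transform_edt(dst_mask)
    d_dst = np.clip(d_dst, d_src.min(), d_src.max())
    edges = np.linspace(d_src.min(), d_src.max(), bins + 1)
    out = np.zeros(dst_mask.shape + (src_styled.shape[-1],), dtype=src_styled.dtype)
    for b in range(bins):
        sel_src = (d_src >= edges[b]) & (d_src <= edges[b + 1])
        sel_dst = (d_dst >= edges[b]) & (d_dst <= edges[b + 1])
        if sel_dst.any() and sel_src.any():
            # Sample styled source pixels from the same distance band.
            pool = src_styled[sel_src]
            out[sel_dst] = pool[rng.integers(0, len(pool), sel_dst.sum())]
    return out
```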
[{"start": 0.0, "end": 4.68, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.68, "end": 10.48, "text": " Before we start, it is important to emphasize that this paper is not using neural networks."}, {"start": 10.48, "end": 15.32, "text": " Not so long ago, in 2015, the news took the world by storm."}, {"start": 15.32, "end": 19.66, "text": " Researchers were able to create a novel neural network-based technique for artistic style"}, {"start": 19.66, "end": 24.44, "text": " transfer, which had quickly become a small subfield of its own within machine learning."}, {"start": 24.44, "end": 26.52, "text": " The problem definition was the following."}, {"start": 26.52, "end": 31.96, "text": " We provide an input image and a source photograph, and the goal is to extract the artistic style"}, {"start": 31.96, "end": 34.879999999999995, "text": " of this photo and apply it to our image."}, {"start": 34.879999999999995, "end": 39.72, "text": " The results were absolutely stunning, but at the same time, it was difficult to control"}, {"start": 39.72, "end": 40.72, "text": " the outcome."}, {"start": 40.72, "end": 45.54, "text": " Later, this technique was generalized for higher resolution images, and instead of waiting"}, {"start": 45.54, "end": 52.08, "text": " for hours, it now works in almost real time and is used in several commercial products."}, {"start": 52.08, "end": 57.8, "text": " Wow, it is rare to see a new piece of technology introduced to the market so quickly."}, {"start": 57.8, "end": 58.8, "text": " Really cool."}, {"start": 58.8, "end": 64.38, "text": " However, this piece of work showcases a handcrafted algorithm that only works on a specialized"}, {"start": 64.38, "end": 69.8, "text": " case of inputs, text-based effects, but in this domain, it smokes the competition."}, {"start": 69.8, "end": 74.52, "text": " And here, the style transfer happens not with any kind of neural network or other popular"}, {"start": 74.52, "end": 78.16, "text": " learning algorithm, but in terms of statistics."}, {"start": 78.16, "end": 82.6, "text": " In this formulation, we know about the source text as well, and because of that, we know"}, {"start": 82.6, "end": 87.96, "text": " exactly the kind of effects that are applied to it, kind of like a before and after image"}, {"start": 87.96, "end": 90.12, "text": " for some beauty product, if you will."}, {"start": 90.12, "end": 95.4, "text": " This opens up the possibility of analyzing its statistical properties and applying a similar"}, {"start": 95.4, "end": 98.72, "text": " effect to practically any kind of text input."}, {"start": 98.72, "end": 104.24, "text": " The term statistically means that we are not interested in one isolated case, but we describe"}, {"start": 104.24, "end": 110.19999999999999, "text": " general rules, namely, in what distance from the text, what is likely to happen to it."}, {"start": 110.19999999999999, "end": 115.6, "text": " The resulting technique is remarkably robust and works on a variety of input output pairs"}, {"start": 115.6, "end": 121.56, "text": " and is head and shoulders beyond the competition, including the state of the art neural network-based"}, {"start": 121.56, "end": 122.56, "text": " techniques."}, {"start": 122.56, "end": 124.75999999999999, "text": " That is indeed quite remarkable."}, {"start": 124.75999999999999, "end": 129.35999999999999, "text": " I expect graphic designers to be all over this technique in the very near future."}, {"start": 129.36, 
"end": 135.24, "text": " This is an excellent, really well written paper, and the evaluation is also of high quality."}, {"start": 135.24, "end": 140.28, "text": " If you wish to see how one can do this kind of magic by hand without resorting to neural"}, {"start": 140.28, "end": 143.92000000000002, "text": " networks, don't miss out on this one and make sure to have a look."}, {"start": 143.92000000000002, "end": 148.84, "text": " There's also a possibility of having a small degree of artistic control over the outputs,"}, {"start": 148.84, "end": 154.12, "text": " and who knows, some variant of this could open up the possibility of a fully animated style"}, {"start": 154.12, "end": 156.36, "text": " transfer from one image."}, {"start": 156.36, "end": 157.44000000000003, "text": " Wow!"}, {"start": 157.44, "end": 161.92, "text": " And before we go, we'd like to send a huge shout out to our fellow scholars who contributed"}, {"start": 161.92, "end": 165.52, "text": " translations to our episodes for a variety of languages."}, {"start": 165.52, "end": 170.12, "text": " Please note that the names of the contributors are always available in the video description."}, {"start": 170.12, "end": 174.76, "text": " It is really great to see how the series is becoming more and more available for people"}, {"start": 174.76, "end": 176.0, "text": " around the globe."}, {"start": 176.0, "end": 180.56, "text": " If you wish to contribute, click on the cogwheel icon in the lower right and the subtitle"}, {"start": 180.56, "end": 182.28, "text": " slash cc text."}, {"start": 182.28, "end": 183.28, "text": " Thank you so much."}, {"start": 183.28, "end": 187.4, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=oitGRdHFNWw
Deep Learning Program Hallucinates Videos | Two Minute Papers #120
The paper "Generating Videos with Scene Dynamics" and its source code, and a pre-trained network is available here: http://web.mit.edu/vondrick/tinyvideo/ Recommended for you: Image Synthesis From Text With Deep Learning - https://www.youtube.com/watch?v=rAbhypxs1qQ What is an Autoencoder? - https://www.youtube.com/watch?v=Rdpbnd0pCiI Hallucinating Images With Deep Learning - https://www.youtube.com/watch?v=hnT-P3aALVE WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail image credit: https://pixabay.com/photo-1751455/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Ever thought about the fact that we have a stupendously large amount of unlabeled videos on the internet? And, of course, with the ascendancy of machine learning algorithms that can learn by themselves, it would be a huge missed opportunity to not make use of all this free data. This is a crazy piece of work where the idea is to unleash a neural network on a large number of publicly uploaded videos on the internet and see how well it does if we ask it to generate new videos from scratch. Here, unlabeled means there is no information as to what we see in these videos, they are just provided as is. Machine learning methods that work on this kind of data, we like to call unsupervised learning techniques. This work is based on a generative adversarial network. Wait, what does this mean exactly? This means that we have two neural networks that compete against each other, where one tries to generate more and more real-looking animations and passes them over to the other, which learns to tell real footage from fake. The first we call the generator network, and the second is the discriminator network. They try to outperform each other, and this rivalry goes on for quite a while and improves the quality of output for both neural networks, hence the name generative adversarial networks. Earlier, we covered this concept when it was used to generate images from written text descriptions. The shortcoming of this approach was the slow training time that led to extremely tiny, low-resolution output images. This was remedied by a follow-up work which proposed a two-stage version of this architecture. We have covered this in an earlier Two Minute Papers episode; as always, the link is available in the video description. It would not be an overstatement to say that I nearly fell off the chair when seeing these incredible results. So where do we go from here? What shall be the next step? Well, of course, video. However, the implementation of such a technique is far from trivial. In this piece of work, the generator network works not on one monolithic representation of the videos, but on the foreground and background video streams separately, and it also has to learn what combination of these yields realistic footage. This two-stream architecture is particularly useful in modeling real-world videos where the background is mostly stationary and there is animated movement in the foreground. A train passing the station or people playing golf on the field are excellent examples of this kind of separation. We definitely need a high-quality discriminator network as well, since in the final synthesized footage, not only must the foreground and background go well together, but the synthesized animations also have to be believable for human beings. This human being, in our case, is represented by the discriminator network. Needless to say, this problem is extremely difficult, and the quality of the discriminator network makes or breaks this magic trick. And of course, the all-important question immediately arises: if there are multiple algorithms performing this task, how do we decide which one is the best? Generally, we get a few people, show them a piece of synthesized footage from this algorithm and from previous works, and have them decide which one they deem more realistic. This is still the first step.
I expect these techniques to improve so rapidly that we'll soon find ourselves testing against real-world footage, and who knows, sometimes, perhaps, failing to recognize which is which. The results in the paper show that this new technique beats the previous techniques by a significant margin and that users have a strong preference towards the two-stream architecture. The previous technique they compare against is an autoencoder, which we have discussed in a previous Two Minute Papers episode; check it out, it is available in the video description. The disadvantages of this approach are quite easy to identify this time around. We have a very limited resolution for these output video streams, that is, 64 x 64 pixels for 32 frames, which even at a modest frame rate is just slightly over one second of footage. The synthesized results vary greatly in quality, but it's remarkable to see that the machine can have a rough understanding of a large variety of movement and animation types. It is really incredible to see that the neural network learns about the representations of these objects and how they move, even when it wasn't explicitly instructed to do so. We can also visualize what the neural network has learned. This is done by finding different image inputs that make a particular neuron extremely excited. Here, we see a collection of inputs including these activations for images of people and trains. The authors' website is definitely worth checking out, as some of the sub-menus are quite ample in results. Some amazing, some, well, a bit horrifying, but what is sure is that all of them are quite interesting. And before we go, a huge shoutout to Las Lo Chambes, who helped us quite a bit in sorting out a number of technical issues with the series. Thanks for watching and for your generous support, and I'll see you next time.
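To make the two-stream generator idea above a bit more concrete, here is a minimal sketch in PyTorch. This is not the authors' code: the layer sizes, the latent dimension of 100, and the 32-frame, 64 x 64 output shape are illustrative assumptions chosen only to match the resolution mentioned in the episode.

import torch
import torch.nn as nn

class TwoStreamVideoGenerator(nn.Module):
    """Sketch of a foreground/background two-stream video generator (illustrative only)."""
    def __init__(self, z_dim=100):
        super().__init__()
        # Foreground stream: 3D transposed convolutions grow a latent vector into a
        # moving foreground clip plus a per-pixel mask saying where the foreground is visible.
        self.fg = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, kernel_size=(2, 4, 4)), nn.ReLU(),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.fg_rgb = nn.ConvTranspose3d(32, 3, 4, stride=2, padding=1)
        self.fg_mask = nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1)
        # Background stream: 2D transposed convolutions produce a single static image.
        self.bg = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, kernel_size=4), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fg(z.view(z.size(0), -1, 1, 1, 1))       # (B, 32, 16, 32, 32) features
        fg = torch.tanh(self.fg_rgb(h))                   # (B, 3, 32, 64, 64) moving foreground
        mask = torch.sigmoid(self.fg_mask(h))             # (B, 1, 32, 64, 64) compositing mask
        bg = self.bg(z.view(z.size(0), -1, 1, 1))         # (B, 3, 64, 64) static background
        bg = bg.unsqueeze(2).expand(-1, -1, fg.size(2), -1, -1)
        return mask * fg + (1.0 - mask) * bg              # composited 32-frame video clip

Calling TwoStreamVideoGenerator()(torch.randn(1, 100)) would produce one 32-frame, 64 x 64 RGB clip. The discriminator would be a mirror-image stack of 3D convolutions that scores such clips as real or fake, and the two networks are trained in alternation as described above.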
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.0, "end": 9.84, "text": " Ever thought about the fact that we have a stupendously large amount of unlabeled videos"}, {"start": 9.84, "end": 10.84, "text": " on the internet?"}, {"start": 10.84, "end": 16.76, "text": " And, of course, with the ascendancy of machine learning algorithms that can learn by themselves,"}, {"start": 16.76, "end": 21.56, "text": " it would be a huge missed opportunity to not make use of all this free data."}, {"start": 21.56, "end": 27.28, "text": " This is a crazy piece of work where the idea is to unleash a neural network on a large number"}, {"start": 27.28, "end": 33.44, "text": " of publicly uploaded videos on the internet and see how well it does if we ask it to generate"}, {"start": 33.44, "end": 35.400000000000006, "text": " new videos from scratch."}, {"start": 35.400000000000006, "end": 40.8, "text": " Here, unlabeled means there is no information as to what we see in these videos, they are"}, {"start": 40.8, "end": 42.96, "text": " just provided as is."}, {"start": 42.96, "end": 47.64, "text": " Machine learning methods that work on this kind of data, we like to call unsupervised"}, {"start": 47.64, "end": 49.040000000000006, "text": " learning techniques."}, {"start": 49.040000000000006, "end": 52.760000000000005, "text": " This work is based on a generative adversarial network."}, {"start": 52.760000000000005, "end": 55.24, "text": " Wait, what does this mean exactly?"}, {"start": 55.24, "end": 59.800000000000004, "text": " This means that we have two neural networks that raise each other, where one tries to generate"}, {"start": 59.800000000000004, "end": 65.56, "text": " more and more real looking animations and passes it over to the other that learns to tell"}, {"start": 65.56, "end": 67.96000000000001, "text": " real footage from fake ones."}, {"start": 67.96000000000001, "end": 73.0, "text": " The first we call the Generator Network and the second is the Discriminator Network."}, {"start": 73.0, "end": 78.16, "text": " They try to outperform each other and this rivalry goes on for quite a while and improves"}, {"start": 78.16, "end": 84.72, "text": " the quality of output for both neural networks, hence the name Generative Adversarial Networks."}, {"start": 84.72, "end": 89.88, "text": " At first we have covered this concept that was used to generate images from written text"}, {"start": 89.88, "end": 90.96, "text": " descriptions."}, {"start": 90.96, "end": 96.08, "text": " The shortcoming of this approach was the slow training time that led to extremely tiny"}, {"start": 96.08, "end": 98.16, "text": " low resolution output images."}, {"start": 98.16, "end": 103.56, "text": " This was remedied by a follow-up work which proposed a two-stage version of this architecture."}, {"start": 103.56, "end": 107.68, "text": " We have covered this in an earlier two-minute paper set-by-side, as always the link is"}, {"start": 107.68, "end": 109.68, "text": " available in the video description."}, {"start": 109.68, "end": 114.0, "text": " It would not be an understatement to say that I nearly fell off the chair when seeing"}, {"start": 114.0, "end": 116.12, "text": " these incredible results."}, {"start": 116.12, "end": 117.92, "text": " So where do we go from here?"}, {"start": 117.92, "end": 119.72, "text": " What shall be the next step?"}, {"start": 119.72, "end": 121.96, "text": " Well, of course, video."}, {"start": 121.96, "end": 
126.32, "text": " However, the implementation of such a technique is far from trivial."}, {"start": 126.32, "end": 130.92000000000002, "text": " In this piece of work, the Generator Network learns not only the original representation"}, {"start": 130.92000000000002, "end": 136.92000000000002, "text": " of the videos, but on the foreground and background video streams separately and it also has"}, {"start": 136.92000000000002, "end": 141.32, "text": " to learn what combination of these yields realistic footage."}, {"start": 141.32, "end": 146.07999999999998, "text": " This two-stree architecture is particularly useful in modeling real-world videos where"}, {"start": 146.07999999999998, "end": 151.32, "text": " the background is mostly stationary and there is an animated movement in the foreground."}, {"start": 151.32, "end": 156.48, "text": " A train passing the station or people playing golf on the field are excellent examples"}, {"start": 156.48, "end": 158.44, "text": " of this kind of separation."}, {"start": 158.44, "end": 163.48, "text": " We definitely need a high-quality discriminator network as well, as in the final synthesized"}, {"start": 163.48, "end": 169.6, "text": " footage not only the foreground and background must go well together, but the synthesized animations"}, {"start": 169.6, "end": 172.68, "text": " also have to be believable for human beings."}, {"start": 172.68, "end": 177.32, "text": " This human being, in our case, is represented by the discriminator network."}, {"start": 177.32, "end": 181.95999999999998, "text": " Needless to say, this problem is extremely difficult and the quality of the discriminator"}, {"start": 181.95999999999998, "end": 184.79999999999998, "text": " network makes or breaks this magic trick."}, {"start": 184.79999999999998, "end": 189.92, "text": " And of course, the all-important question immediately arises if there are multiple algorithms"}, {"start": 189.92, "end": 194.28, "text": " performing this action, how do we decide which one is the best?"}, {"start": 194.28, "end": 198.88, "text": " Generally, we get a few people and show them a piece of synthesized footage with this"}, {"start": 198.88, "end": 204.51999999999998, "text": " algorithm and previous works and have them decide which they deemed more realistic."}, {"start": 204.51999999999998, "end": 206.32, "text": " This is still the first step."}, {"start": 206.32, "end": 211.48, "text": " I expect these techniques to improve so rapidly that we'll soon find ourselves testing"}, {"start": 211.48, "end": 216.76, "text": " against real-world footage and who knows, sometimes, perhaps, failing to recognize which"}, {"start": 216.76, "end": 217.8, "text": " is which."}, {"start": 217.8, "end": 222.16, "text": " The results in the paper show that this new technique beats the previous techniques by"}, {"start": 222.16, "end": 226.76, "text": " a significant margin and that users have a strong preference towards the two-stree"}, {"start": 226.76, "end": 227.76, "text": " architecture."}, {"start": 227.76, "end": 231.79999999999998, "text": " The previous technique they compare against is an auto-ank order which we have discussed"}, {"start": 231.79999999999998, "end": 237.0, "text": " in a previous two-minute paper episode, check it out, it is available in the video description."}, {"start": 237.0, "end": 241.44, "text": " The disadvantages of this approach are quite easy to identify this time around."}, {"start": 241.44, "end": 247.88, "text": " We have a very limited resolution for these 
output video streams that is 64 x 64 pixels"}, {"start": 247.88, "end": 254.39999999999998, "text": " for 32 frames, which even at a modest frame rate is just slightly over one second of footage."}, {"start": 254.4, "end": 259.36, "text": " The synthesized results vary greatly in quality, but it's remarkable to see that the machine"}, {"start": 259.36, "end": 264.92, "text": " can have a rough understanding of the concept of a large variety of movement and animation"}, {"start": 264.92, "end": 265.92, "text": " types."}, {"start": 265.92, "end": 270.2, "text": " It is really incredible to see that the neural network learns about the representations"}, {"start": 270.2, "end": 276.16, "text": " of these objects and how they move even when it wasn't explicitly instructed to do so."}, {"start": 276.16, "end": 279.4, "text": " We can also visualize what the neural network has learned."}, {"start": 279.4, "end": 285.12, "text": " This is done by finding different image inputs that make a particular neuron extremely excited."}, {"start": 285.12, "end": 290.35999999999996, "text": " Here, we see a collection of inputs including these activations for images of people and"}, {"start": 290.35999999999996, "end": 291.35999999999996, "text": " trains."}, {"start": 291.35999999999996, "end": 295.88, "text": " The author's website is definitely worthy of checking out as some of the sub-menus are"}, {"start": 295.88, "end": 297.67999999999995, "text": " quite ample in results."}, {"start": 297.67999999999995, "end": 303.59999999999997, "text": " Some amazing, some, well, a bit horrifying, but what is sure is that all of them are"}, {"start": 303.59999999999997, "end": 304.91999999999996, "text": " quite interesting."}, {"start": 304.92, "end": 309.84000000000003, "text": " And before we go, a huge shoutout to Las Lo Chambes, who helped us quite a bit in sorting"}, {"start": 309.84000000000003, "end": 312.56, "text": " out a number of technical issues with the series."}, {"start": 312.56, "end": 335.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7aLda2E0Yyg
Amazing Slow Motion Videos With Optical Flow | Two Minute Papers #119
The paper "An Iterative Image Registration Technique with an Application to Stereo Vision" is available here: http://cseweb.ucsd.edu/classes/sp02/cse252/lucaskanade81.pdf Our earlier episode on extrapolation: https://www.youtube.com/watch?v=AHl2JjGsu0s WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Other video credits: Simulating Viscosity and Melting Fluids - https://www.youtube.com/watch?v=KgIrnR2O8KQ&list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI&index=2 Modeling Colliding and Merging Fluids - https://www.youtube.com/watch?v=uj8b5mu0P7Y&list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI&index=8 Multiphase Fluid Simulations - https://www.youtube.com/watch?v=cUWDeDRet4c&list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI&index=11 Thumbnail background image credit: https://pixabay.com/photo-1032741/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. I am really excited to show this to you, as I have been looking to make this episode for quite a while. You'll see lots of beautiful slow motion footage during the narration, and at first it may seem disconnected from the narrative, but by the end of the video you'll understand why they look the way they do. Now before we proceed, let's talk about the difference between interpolation and extrapolation. One means that we have measurement points for a given quantity and would like to know what happened between these points. For instance, we have two samples of a person's location at 4 and at 5 o'clock, and we'd like to know where the guy was at 4:30. However, if we are doing extrapolation, we are interested in guessing a quantity beyond the reach of our sample points. For instance, extrapolation would be predicting what happens after the very last frame of the video. In an earlier episode, we talked about financial extrapolation; make sure to have a look, it was super fun. The link is in the video description. Optical flow is really useful because it can do this kind of interpolation and extrapolation for images. So let's do one of them right now. You'll see which it will be in a second. So this is a classical scenario that we often encounter when producing a new 2 Minute Papers episode. This means that roughly every other frame is duplicated and offers no new information. You can see this as I step through these individual frames. And the more astute Fellow Scholars would immediately point out: wait. We have a lot of before and after image pairs, so we could do a lot better. Why don't we try to estimate what happened between these images? And that is exactly what we call frame interpolation. Interpolation, because it is something between two known measurement points. And if we run the optical flow algorithm that can accomplish this, we can fill in these doubled frames with new ones that actually carry new information. So the ratio here was roughly 2 to 1. Roughly every other frame provides new information. Super cool. So what are the limits of this technique? What if we artificially slow the video down so that it's much longer, so not only every other, but most of the frames are just duplicates? This results in a boring and choppy animation. Can we fill those in too? Note that the basic optical flow equations are written for tiny little changes in position, so we shouldn't expect it to be able to extrapolate or interpolate any quantity over a longer period of time. But of course, it always depends on the type of motion we have at hand, so let's give it a try. As you can see, with optical flow the algorithm has an understanding of the motions that take place in the footage, and because of that, we can get some smooth, buttery slow motion footage that is absolutely mesmerizing. Almost like shooting with a slow motion camera. And note that the majority of these frames did not contain any new information, and this motion was synthesized from these distant sample points that are miles and miles away from each other. Really cool. However, it is also important to point out that optical flow is not a silver bullet, and it should be used with moderation and special care, as it can also introduce nasty artifacts like the one that you see here. This is due to an abrupt, high frequency change that is more difficult to predict than a slow and steady translation or rotation motion. 
To avoid these cases, we can use a much simpler frame interpolation technique that we call frame blending. This is a more naive technique that doesn't do any meaningful guesswork and simply computes the average of the two neighboring frames. Why don't we give this one a try too? Or even better, let's have a look at the difference between the original choppy footage and the interpolated versions with frame blending and optical flow. If we do that, we see that frame blending is unlikely to give us nasty artifacts, but in return, the results are significantly more limited compared to optical flow, because it doesn't have an understanding of the motion taking place in the footage. So the question is, when to use which? Well, until we get an algorithm that is able to adaptively decide when to use what, it still comes down to individual judgment and sometimes quite a bit of trial and error. I'd like to make it extremely clear that you shouldn't leave this video thinking that this is the only application of optical flow. It's just one of the coolest ones. But this motion estimation technique also has many other uses. For instance, if we have an unmanned aerial vehicle, it is really great if we can endow it with an optical flow sensor, because then it will be able to know in which direction it needs to rotate to avoid a tree, or whether it is stable or not at a given point in time. And with your support on Patreon, we were not only able to bump up the resolution of future Two Minute Papers episodes to 4K, but we're also running them at true 60 frames per second, which means that every footage can undergo either a frame blending or optical flow step to make the animation smoother and more enjoyable for you. This takes a bit of human labor and is computationally expensive, but our new Two Minute Papers rig is now capable of handling this. It is fantastic to see that you Fellow Scholars are willing to support the series, and through this we can introduce highly desirable improvements to the production pipeline. This is why we thank you at the end of every episode for your generous support. You Fellow Scholars are the best YouTube audience anywhere. And who knows, maybe one day we'll be at a point where Two Minute Papers can be a full-time endeavor and we'll be able to make even more elaborate episodes. As I am tremendously enjoying making these videos, that would be absolutely amazing. Have you found any of these disturbing optical flow artifacts during this episode? Have you spotted some of these in other videos on YouTube? Let us know in the comments section so we can learn from each other. Thanks for watching and for your generous support. I'll see you next time.
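To make the difference between the two interpolation strategies above a bit more tangible, here is a rough sketch of both in Python, assuming OpenCV and NumPy are available. The Farneback parameters and the half-step warping are illustrative choices, not the exact settings used for producing the episodes.

import cv2
import numpy as np

def blend_middle(frame_a, frame_b):
    # Frame blending: no motion model at all, just the average of the two neighboring frames.
    return cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0.0)

def flow_middle(frame_a, frame_b):
    # Estimate dense optical flow from frame A to frame B (one motion vector per pixel).
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Move the content of frame A halfway along its motion vectors; this is a crude
    # approximation of the in-between frame and ignores occlusions entirely.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

A more careful version would warp both neighboring frames toward the middle and fall back to plain blending wherever the flow looks unreliable, which is exactly the kind of judgment call mentioned above.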
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.72, "end": 9.08, "text": " I am really excited to show this to you as I was looking to make this episode for quite"}, {"start": 9.08, "end": 10.08, "text": " a while."}, {"start": 10.08, "end": 14.42, "text": " You'll see lots of beautiful slow motion footage during the narration, and at first"}, {"start": 14.42, "end": 19.36, "text": " it may seem disconnected from the narrative, but by the end of the video you'll understand"}, {"start": 19.36, "end": 21.240000000000002, "text": " why they look the way they do."}, {"start": 21.240000000000002, "end": 28.080000000000002, "text": " Now before we proceed, let's talk about the difference between interpolation and extrapolation."}, {"start": 28.08, "end": 32.699999999999996, "text": " One means that we have measurement points for a given quantity and would like to know"}, {"start": 32.699999999999996, "end": 35.04, "text": " what happened between these points."}, {"start": 35.04, "end": 40.32, "text": " For instance, we have two samples of a person's location at 4 and at 5 o'clock, and we'd"}, {"start": 40.32, "end": 43.4, "text": " like to know where the guy was at 4.30."}, {"start": 43.4, "end": 48.68, "text": " However, if we are doing extrapolation, we are interested in guessing a quantity beyond"}, {"start": 48.68, "end": 51.04, "text": " the reach of our sample points."}, {"start": 51.04, "end": 56.08, "text": " For instance, extrapolation would be predicting what happens after the very last frame of the"}, {"start": 56.08, "end": 57.08, "text": " video."}, {"start": 57.08, "end": 61.44, "text": " For our earlier episode, we talked about financial extrapolation, make sure to have a look"}, {"start": 61.44, "end": 62.8, "text": " it was super fun."}, {"start": 62.8, "end": 64.96, "text": " The link is in the video description."}, {"start": 64.96, "end": 70.16, "text": " Optical flow is really useful because it can do this kind of interpolation and extrapolation"}, {"start": 70.16, "end": 71.4, "text": " for images."}, {"start": 71.4, "end": 73.96, "text": " So let's do one of them right now."}, {"start": 73.96, "end": 76.24, "text": " You'll see which it will be in a second."}, {"start": 76.24, "end": 81.4, "text": " So this is a classical scenario that we often encounter when producing a new 2 minute"}, {"start": 81.4, "end": 87.56, "text": " paper episode."}, {"start": 87.56, "end": 95.0, "text": " This means that roughly every other frame is duplicated and offers no new information."}, {"start": 95.0, "end": 98.36000000000001, "text": " You can see this as I step through these individual frames."}, {"start": 98.36000000000001, "end": 102.76, "text": " And the more astute fellow scholars immediately would point out that wait."}, {"start": 102.76, "end": 108.60000000000001, "text": " We have a lot of before and after image pairs, so we could do a lot better."}, {"start": 108.6, "end": 112.28, "text": " Why don't we try to estimate what happened between these images?"}, {"start": 112.28, "end": 116.47999999999999, "text": " And that is exactly what we call frame interpolation."}, {"start": 116.47999999999999, "end": 120.08, "text": " Interpolation because it is something between two known measurement points."}, {"start": 120.08, "end": 124.63999999999999, "text": " And if we run the optical flow algorithm that can accomplish this, we can fill in these"}, {"start": 124.63999999999999, "end": 137.24, 
"text": " doubled frames with new ones that actually carry new information."}, {"start": 137.24, "end": 140.32000000000002, "text": " So the ratio here was roughly 2 to 1."}, {"start": 140.32000000000002, "end": 143.52, "text": " Roughly every other frame provides new information."}, {"start": 143.52, "end": 148.84, "text": " Super cool."}, {"start": 148.84, "end": 151.52, "text": " So what are the limits of this technique?"}, {"start": 151.52, "end": 157.12, "text": " What if we artificially slow the video down so that it's much longer, so not only every"}, {"start": 157.12, "end": 161.04000000000002, "text": " other, but most of the frames are just duplicates?"}, {"start": 161.04000000000002, "end": 164.20000000000002, "text": " This results in a boring and choppy animation."}, {"start": 164.2, "end": 169.48, "text": " And we fill those in too, note that the basic optical flow equations are written for tiny"}, {"start": 169.48, "end": 174.83999999999997, "text": " little changes in position, so we shouldn't expect it to be able to extrapolate or interpolate"}, {"start": 174.83999999999997, "end": 177.92, "text": " any quantity over a longer period of time."}, {"start": 177.92, "end": 181.92, "text": " But of course, it always depends on the type of motion we have at hand, so let's give"}, {"start": 181.92, "end": 198.67999999999998, "text": " it a try."}, {"start": 198.67999999999998, "end": 203.83999999999997, "text": " As you can see with optical flow, the algorithm has an understanding of the motions that take"}, {"start": 203.83999999999997, "end": 209.0, "text": " place in the footage, and because of that, we can get some smooth, buttery, slow motion"}, {"start": 209.0, "end": 212.16, "text": " footage that is absolutely mesmerizing."}, {"start": 212.16, "end": 214.56, "text": " Almost like shooting with a slow motion camera."}, {"start": 214.56, "end": 219.36, "text": " And note that the majority of these frames were not containing any information, and this"}, {"start": 219.36, "end": 224.16, "text": " motion was synthesized from these distant sample points that are miles and miles away"}, {"start": 224.16, "end": 225.64, "text": " from each other."}, {"start": 225.64, "end": 226.64, "text": " Really cool."}, {"start": 226.64, "end": 231.76, "text": " However, it is also important to point out that optical flow is not a silver bullet,"}, {"start": 231.76, "end": 236.7, "text": " and it should be used with moderation and special care, as it can also introduce nasty"}, {"start": 236.7, "end": 239.2, "text": " artifacts like the one that you see here."}, {"start": 239.2, "end": 243.72, "text": " This is due to an abrupt, high frequency change that is more difficult to predict than"}, {"start": 243.72, "end": 246.95999999999998, "text": " a slow and steady translation or rotation motion."}, {"start": 246.95999999999998, "end": 252.26, "text": " To avoid these cases, we can use a much simpler frame interpolation technique that we call"}, {"start": 252.26, "end": 253.45999999999998, "text": " frame blending."}, {"start": 253.45999999999998, "end": 257.59999999999997, "text": " This is a more naive technique that doesn't do any meaningful guesswork and computes"}, {"start": 257.59999999999997, "end": 260.0, "text": " the average of the two results."}, {"start": 260.0, "end": 261.84, "text": " Why don't we give this one a try too?"}, {"start": 261.84, "end": 266.32, "text": " Or even better, let's have a look at the difference between the original choppy footage"}, {"start": 266.32, "end": 
270.32, "text": " and the interpolated versions with frame blending and optical flow."}, {"start": 270.32, "end": 275.32, "text": " If we do that, we see that frame blending is unlikely to give us nasty artifacts, but"}, {"start": 275.32, "end": 280.48, "text": " in return, the results are significantly more limited compared to optical flow because"}, {"start": 280.48, "end": 284.48, "text": " it doesn't have an understanding of the motion taking place in the footage."}, {"start": 284.48, "end": 286.92, "text": " So the question is when to use which?"}, {"start": 286.92, "end": 292.28, "text": " Well, until we get an algorithm that is able to adaptively decide when to use what, it"}, {"start": 292.28, "end": 297.35999999999996, "text": " still comes down to individual judgment and sometimes quite a bit of trial and error."}, {"start": 297.35999999999996, "end": 301.59999999999997, "text": " I'd like to make it extremely sure that you don't leave this video thinking that this"}, {"start": 301.59999999999997, "end": 304.23999999999995, "text": " is the only application of optical flows."}, {"start": 304.23999999999995, "end": 306.08, "text": " It's just one of the coolest ones."}, {"start": 306.08, "end": 309.88, "text": " But this motion estimation technique also has many other uses."}, {"start": 309.88, "end": 314.88, "text": " For instance, if we have an unmanned aerial vehicle, it is really great if we can endow"}, {"start": 314.88, "end": 319.84, "text": " it with an optical flow sensor because then it will be able to know in which direction"}, {"start": 319.84, "end": 325.71999999999997, "text": " it needs to rotate to avoid a tree or whether it is stable or not at a given point in time."}, {"start": 325.71999999999997, "end": 330.28, "text": " And with your support on Patreon, we were not only able to bump up the resolution of"}, {"start": 330.28, "end": 335.91999999999996, "text": " future two-minute paper's episodes to 4K, but we're also running them at true 60 frames"}, {"start": 335.91999999999996, "end": 341.4, "text": " per second, which means that every footage can undergo either a frame blending or optical"}, {"start": 341.4, "end": 345.76, "text": " flow step to make the animation smoother and more enjoyable for you."}, {"start": 345.76, "end": 350.88, "text": " This takes a bit of human labor and is computationally expensive, but our new two-minute paper's"}, {"start": 350.88, "end": 353.4, "text": " rig is now capable of handling this."}, {"start": 353.4, "end": 358.36, "text": " It is fantastic to see that you fellow scholars are willing to support the series, and through"}, {"start": 358.36, "end": 363.03999999999996, "text": " this we can introduce highly desirable improvements to the production pipeline."}, {"start": 363.03999999999996, "end": 367.03999999999996, "text": " This is why we thank you at the end of every episode for your generous support."}, {"start": 367.03999999999996, "end": 370.48, "text": " You fellow scholars are the best YouTube audience anywhere."}, {"start": 370.48, "end": 375.32, "text": " And who knows, maybe one day, will be at a point where two-minute papers can be a full-time"}, {"start": 375.32, "end": 379.2, "text": " endeavor and will be able to make even more elaborate episodes."}, {"start": 379.2, "end": 384.28, "text": " As I am tremendously enjoying making these videos, that would be absolutely amazing."}, {"start": 384.28, "end": 388.92, "text": " Have you found any of these disturbing optical flow artifacts during this episode?"}, 
{"start": 388.92, "end": 392.2, "text": " Have you spotted some of these in other videos on YouTube?"}, {"start": 392.2, "end": 395.24, "text": " Let us know in the comments section so we can learn from each other."}, {"start": 395.24, "end": 397.48, "text": " Thanks for watching and for your generous support."}, {"start": 397.48, "end": 415.72, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iOWamCtnwTc
Neural Network Learns The Physics of Fluids and Smoke | Two Minute Papers #118
The paper "Accelerating Eulerian Fluid Simulation With Convolutional Networks" and its source code is available here: http://cims.nyu.edu/~schlacht/CNNFluids.htm https://users.cg.tuwien.ac.at/zsolnai/accelerating-eulerian-fluid-simulation-convolutional-networks/ https://github.com/google/FluidNet The mentioned previous work has used an SPH-based Lagrangian simulation, performed the regression with regression forests, and the process also has included a fair amount of feature engineering. It is an excellent piece of work by the name "Data-driven Fluid Simulations using Regression Forests" and is a highly recommended read: https://www.inf.ethz.ch/personal/ladickyl/fluid_sigasia15.pdf https://www.youtube.com/watch?v=kGB7Wd9CudA Video credits: Surface-Only Liquids - https://www.youtube.com/watch?v=-rf_MDh-FiE&list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI&index=6 Schrödinger's Smoke - https://www.youtube.com/watch?v=heY2gfXSHBo&list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI&index=5 Thumbnail image background credit: https://pixabay.com/photo-889131/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work is still in progress, done by one of the members of the Google Brain Research Team and several researchers from the amazing New York University. The goal was to show a neural network video footage of lots and lots of fluid and smoke simulations and have it learn how the dynamics work, to the point that it can continue and guess how the behavior of a smoke puff would change in time. We stop the video and it learns how to continue it, if you will. Now that is a tall order if I've ever seen one. Most of this episode will not be about the technical details of this method, but about the importance and ramifications of such a technique. And since almost all the time our episodes are about already published works, it also makes a great case study on how to evaluate and think about the merits and shortcomings of a research project that is still in the works. This definitely is an interesting take, as normally we use neural networks to solve problems that are otherwise close to impossible to tackle. Here the neural networks are applied to solve something that we already know how to solve. And the question immediately comes to mind: why would anyone bother to do that? We've had at the very least 20 episodes on different kinds of incredible fluid simulation techniques, so it is abundantly clear that this is a problem that we can solve. However, the neural network does not only solve it correctly, in the sense that the results are easily confused with real footage, but what's more, the execution time of the algorithm is in the order of a few milliseconds for a reasonably sized simulation. This normally takes several minutes with traditional techniques. It does something that we already know quite well how to do, but it does it better in many regards. Loving the idea behind this work. Learning is a pre-processing step that is a long and arduous process that only has to be done once, and afterwards, querying the neural network that predicts what happens next in the simulation runs almost immediately. In any case, in way less time than calculating all the forces and pressures in the simulation, while retaining high quality results. It is like the preparation for an exam that may take weeks, but when we are finally there in the examination room, if we are well prepared, we make short work of the puny questions the professor has presented us with. I am quietly noting that during my college years, I was also studying the beautiful Navier-Stokes equations, and even as a highly motivated student, it took several months to understand the theory and write my first fluid simulator. This neural network can learn something very similar in a matter of days. What a stunning and, may I say, humbling revelation. Note that this piece of work has not yet been peer reviewed. There are some side-by-side comparisons with real simulations to validate the accuracy of the algorithm, but more rigorous analysis is required before publishing. The failure cases for classical handcrafted techniques are easier to identify because of the fact that their mathematical description is available for scrutiny. In the case of a neural network, this piece of mathematics is also there, but it's not intuitive for human beings, therefore it is harder to assess when it works well and when it is expected to break down. 
We should be particularly vigilant about this fact when evaluating a task performed by any kind of neural network-based learning algorithm. For now, the results look quite reassuring, even the phenomenon of a smoke puff bouncing back from an object is modeled with high fidelity. There was a loosely related work from the ETH Zurich and Disney Research in Switzerland and enumerating the differences is a bit too technical for such a short video, but I have included it in the video description box for the more curious fellow scholars out there. Now you might have noticed the lack of the usual disclaimer in the thumbnail image stating that I did not take any part in the project, which was not the case this time. I feel that it is important to mention my affiliation even though my role in this project has been extremely tiny. You can read about this in the acknowledgment section of the paper. Needless to say, all the credit goes to the authors of this paper for this amazing idea. I envision all kinds of interactive digital media, including the video games of the future being infused with such neural networks for real-time fluid and smoke simulations. And let's not forget that this is only the first step. We haven't even talked about other kinds of perhaps learnable physical simulations with collision detection, shattering glassy objects and gooey soft body simulations. And we also have seen the very first results with light simulation pipelines that are augmented with neural networks. I think it is now a thinly veiled fact that I am extremely excited for this. And this piece of work is not the destination, but a stepping stone towards something truly remarkable. Thanks for watching and for your generous support, and I'll see you next time.
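As a rough illustration of what "learning the dynamics" can mean in this grid-based setting, here is a tiny convolutional network that takes the velocity divergence of an Eulerian simulation and predicts the pressure field, standing in for the expensive iterative pressure solve. This is only a sketch inspired by the description above; the layer sizes, the two input channels, and the training setup are assumptions, not the architecture from the paper.

import torch
import torch.nn as nn

class PressureNet(nn.Module):
    """Sketch: replace the pressure projection step of a grid-based fluid solver with a CNN."""
    def __init__(self):
        super().__init__()
        # Two input channels: velocity divergence and a flag marking solid (boundary) cells.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # one output channel: pressure per cell
        )

    def forward(self, divergence, solid_mask):
        x = torch.stack([divergence, solid_mask], dim=1)  # (B, 2, H, W)
        return self.net(x).squeeze(1)                     # (B, H, W) predicted pressure field

Training targets would come from a classical solver run on many example frames, so the long learning phase happens only once; afterwards each prediction is a handful of convolutions that the graphics card evaluates in milliseconds, which matches the speed argument made in the episode.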
[{"start": 0.0, "end": 5.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karojolna Ifehir."}, {"start": 5.12, "end": 9.88, "text": " This piece of work is still in progress, done by one of the members of the Google Brain"}, {"start": 9.88, "end": 14.56, "text": " Research Team and several researchers from the amazing New York University."}, {"start": 14.56, "end": 19.68, "text": " The goal was to show a neural network video footage of lots and lots of fluid and smoke"}, {"start": 19.68, "end": 25.080000000000002, "text": " simulations and have it learn how the dynamics work to the point that it can continue and"}, {"start": 25.080000000000002, "end": 29.400000000000002, "text": " guess how the behavior of a smoke puff would change in time."}, {"start": 29.4, "end": 33.48, "text": " We stopped the video and it would learn how to continue it, if you will."}, {"start": 33.48, "end": 36.839999999999996, "text": " Now that is a tall order if I've ever seen one."}, {"start": 36.839999999999996, "end": 42.0, "text": " Most of this episode will not be about the technical details of this method, but about the importance"}, {"start": 42.0, "end": 44.36, "text": " and ramifications of such a technique."}, {"start": 44.36, "end": 49.4, "text": " And since almost all the time our episodes are about already published works, it also"}, {"start": 49.4, "end": 55.08, "text": " makes a great case study on how to evaluate and think about the merits and shortcomings"}, {"start": 55.08, "end": 58.04, "text": " of a research project that is still in the works."}, {"start": 58.04, "end": 63.68, "text": " This definitely is an interesting take as normally we use neural networks to solve problems"}, {"start": 63.68, "end": 66.64, "text": " that are otherwise close to impossible to tackle."}, {"start": 66.64, "end": 72.24, "text": " Here the neural networks are applied to solve something that we already know how to solve."}, {"start": 72.24, "end": 77.6, "text": " And the question immediately comes to mind why would anyone bother to do that?"}, {"start": 77.6, "end": 83.88, "text": " We've had the very least 20 episodes on different kinds of incredible fluid simulation techniques"}, {"start": 83.88, "end": 87.92, "text": " so it is abundantly clear that this is a problem that we can solve."}, {"start": 87.92, "end": 93.0, "text": " However, the neural network does not only solve it correctly in a sense that the results"}, {"start": 93.0, "end": 98.6, "text": " are easily confused with real footage, but what's more, the execution time of the algorithm"}, {"start": 98.6, "end": 103.24000000000001, "text": " is in the order of a few milliseconds for a reasonably sized simulation."}, {"start": 103.24000000000001, "end": 106.64, "text": " This normally takes several minutes with traditional techniques."}, {"start": 106.64, "end": 111.6, "text": " It does something that we already know quite well how to do, but it does it better in many"}, {"start": 111.6, "end": 112.6, "text": " regards."}, {"start": 112.6, "end": 115.44, "text": " Loving the idea behind this work."}, {"start": 115.44, "end": 120.6, "text": " Learning is a pre-processing step that is a long and arduous process that only has to"}, {"start": 120.6, "end": 126.03999999999999, "text": " be done once and afterwards, querying the neural network that is predicting what happens"}, {"start": 126.03999999999999, "end": 130.04, "text": " next in the simulation runs almost immediately."}, {"start": 130.04, "end": 135.76, "text": " In any case, in 
a way less time than calculating all the forces and pressures in the simulation"}, {"start": 135.76, "end": 138.32, "text": " while retaining high quality results."}, {"start": 138.32, "end": 143.4, "text": " It is like the preparation for an exam that may take weeks, but when we are finally there"}, {"start": 143.4, "end": 148.72, "text": " in the examination room, if we are well prepared, we make short work of the puny questions"}, {"start": 148.72, "end": 150.8, "text": " the professor has presented us with."}, {"start": 150.8, "end": 156.8, "text": " I am quietly noting that during my college years, I was also studying the beautiful Navier-Stokes"}, {"start": 156.8, "end": 163.44, "text": " equations and even as a highly motivated student, it took several months to understand the theory"}, {"start": 163.44, "end": 165.88, "text": " and write my first fluid simulator."}, {"start": 165.88, "end": 171.04000000000002, "text": " This neural network can learn something very similar in a matter of days."}, {"start": 171.04, "end": 175.16, "text": " What is stunning and may I say humiliating revelation?"}, {"start": 175.16, "end": 178.2, "text": " Note that this piece of work has not yet been peer reviewed."}, {"start": 178.2, "end": 182.72, "text": " There are some side-by-side comparisons with real simulations to validate the accuracy"}, {"start": 182.72, "end": 187.56, "text": " of the algorithm, but more rigorous analysis is required before publishing."}, {"start": 187.56, "end": 192.32, "text": " The failure cases for classical handcrafted techniques are easier to identify because"}, {"start": 192.32, "end": 196.68, "text": " of the fact that their mathematical description is available for scrutiny."}, {"start": 196.68, "end": 201.28, "text": " In the case of a neural network, this piece of mathematics is also there, but it's not"}, {"start": 201.28, "end": 206.76000000000002, "text": " intuitive for human beings, therefore it is harder to assess when it works well and when"}, {"start": 206.76000000000002, "end": 208.52, "text": " it is expected to break down."}, {"start": 208.52, "end": 213.44, "text": " We should be particularly vigilant about this fact when evaluating a task performed by any"}, {"start": 213.44, "end": 216.44, "text": " kind of neural network-based learning algorithm."}, {"start": 216.44, "end": 221.76000000000002, "text": " For now, the results look quite reassuring, even the phenomenon of a smoke puff bouncing"}, {"start": 221.76000000000002, "end": 225.20000000000002, "text": " back from an object is modeled with high fidelity."}, {"start": 225.2, "end": 230.32, "text": " There was a loosely related work from the ETH Zurich and Disney Research in Switzerland"}, {"start": 230.32, "end": 234.79999999999998, "text": " and enumerating the differences is a bit too technical for such a short video, but I"}, {"start": 234.79999999999998, "end": 238.83999999999997, "text": " have included it in the video description box for the more curious fellow scholars out"}, {"start": 238.83999999999997, "end": 239.83999999999997, "text": " there."}, {"start": 239.83999999999997, "end": 244.23999999999998, "text": " Now you might have noticed the lack of the usual disclaimer in the thumbnail image stating"}, {"start": 244.23999999999998, "end": 248.6, "text": " that I did not take any part in the project, which was not the case this time."}, {"start": 248.6, "end": 253.67999999999998, "text": " I feel that it is important to mention my affiliation even though my role in this project"}, 
{"start": 253.67999999999998, "end": 255.16, "text": " has been extremely tiny."}, {"start": 255.16, "end": 258.28, "text": " You can read about this in the acknowledgment section of the paper."}, {"start": 258.28, "end": 263.48, "text": " Needless to say, all the credit goes to the authors of this paper for this amazing idea."}, {"start": 263.48, "end": 268.48, "text": " I envision all kinds of interactive digital media, including the video games of the future"}, {"start": 268.48, "end": 274.15999999999997, "text": " being infused with such neural networks for real-time fluid and smoke simulations."}, {"start": 274.15999999999997, "end": 277.24, "text": " And let's not forget that this is only the first step."}, {"start": 277.24, "end": 282.48, "text": " We haven't even talked about other kinds of perhaps learnable physical simulations"}, {"start": 282.48, "end": 288.08000000000004, "text": " with collision detection, shattering glassy objects and gooey soft body simulations."}, {"start": 288.08000000000004, "end": 293.44, "text": " And we also have seen the very first results with light simulation pipelines that are augmented"}, {"start": 293.44, "end": 294.8, "text": " with neural networks."}, {"start": 294.8, "end": 299.6, "text": " I think it is now a thinly veiled fact that I am extremely excited for this."}, {"start": 299.6, "end": 304.52000000000004, "text": " And this piece of work is not the destination, but a stepping stone towards something truly"}, {"start": 304.52000000000004, "end": 305.52000000000004, "text": " remarkable."}, {"start": 305.52, "end": 324.52, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dQSzmngTbtw
Stunning Video Game Graphics With Voxel Cone Tracing (VXGI) | Two Minute Papers #117
The paper "Interactive Indirect Illumination Using Voxel Cone Tracing" is available here: https://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing Implementations (without highlighting a particular one): https://goo.gl/AZeWAU Our post on Patreon on improvements you can expect from Two Minute Papers in 2017: https://www.patreon.com/posts/improvements-for-7607896 Rendering course at the TU Wien: https://www.youtube.com/watch?v=pjc1QAI6zS0&index=1&list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail image credit: https://pixabay.com/photo-1872196/ Video credits: BR34K - https://www.youtube.com/watch?v=h1hdAQQ3-Ck NVIDIA, Byzantos - https://www.youtube.com/watch?v=cH2_RkfStSk Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/ #rtx #rtxon
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I consider this one to be one of the most influential papers in the field of light transport. Normally, to create a photorealistic image, we have to create a digital copy of a scene and simulate the paths of millions of light rays between the camera and the light sources. This is a very laborious task that usually takes from minutes to hours on a complex scene, noting that there are many well-known corner cases that can take up to days as well. As the rays of light can bounce around potentially indefinitely, and if we add that realistic materials and detailed scene geometry descriptions are not easy to handle mathematically, it is easy to see why this is a notoriously difficult problem. Simulating light transport in real time has been an enduring problem and is still not solved completely for every possible material model and light transport effect. However, voxel cone tracing is as good of a solution as one can wish for at the moment. The original formulation of the problem is continuous, which means that rays of light can bounce around in infinitely many directions, and the entirety of this digital world is considered to be a continuum. If we look at the mathematical formulation, we see infinities everywhere we look. If you would like to learn more about this, I am holding a master-level course at the Technical University of Vienna, the entirety of which is available on YouTube. As always, the link is available in the video description for the more curious Fellow Scholars out there. If we try to approximate this continuous representation with tiny, tiny cubes, we get a version of the problem that is much less complex and easier to tackle. If we do this well, we can make it adaptive, which means that these cubes are smaller where there's a lot of information, so we don't lose out on many details. This data structure we call a sparse voxel octree. For such a solution, mathematicians like to say that this technique works on a discretized version of the continuous problem. And since we are solving a vastly simplified version of the problem, the question is always whether this way we can remain true to the original solution. And the results show beauty unlike anything we've seen in computer game graphics. Just look at this absolutely amazing footage, ice cream for my eyes, and all this runs in real time on your consumer graphics card. Imagine this in the virtual reality applications of the future. My goodness, I've chosen the absolute best profession. Also, this technique maps really well to the graphics card and is already implemented in Unreal Engine 4, and Nvidia has a framework, GameWorks, where they are experimenting with this in their project by the name VXGI. I had a very pleasant visit at Nvidia's GameWorks lab in Switzerland not so long ago, friendly greetings to all the great and fun people in the team. Some kinks still have to be worked out. For instance, there are still issues with light leaking through thin objects. Beyond that, the implementation of this algorithm contains a multitude of tiny little distinct elements. It is indeed true that many of the elements are puzzle pieces that are interchangeable and can be implemented in a number of different ways, and that's likely one of the reasons why Nvidia and others are still preparing their implementation for widespread industry use. Soon, we'll be able to solidify the details some more and see what the best practices are. 
Cannot wait to see this technique appear in the video games of the future. Note that this and future episodes will be available in 4K resolution for a significant bump in the visual quality of the series. It takes a ton of resources to produce these videos, but it's now possible through the support of you Fellow Scholars on Patreon. There's a more detailed write-up on that, I've included it in the video description. Thank you so much for supporting the show throughout 2016, and I'm looking forward to continuing our journey together in 2017. Thanks for watching and for your generous support, and I'll see you next time.
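To give a rough feel for what happens once the scene has been discretized, here is a toy sketch of marching a single cone through a prefiltered (mip-mapped) voxel grid, which is the core trick that replaces tracing huge bundles of individual rays. Everything here is an illustrative simplification under assumed data layouts, not the production VXGI implementation: the grid is a plain dense mip pyramid rather than a sparse voxel octree, and the lookup is nearest-neighbor instead of hardware-filtered.

import numpy as np

def sample_voxel(mips, position, level):
    # Nearest-neighbor stand-in for a filtered 3D texture lookup. mips[l] is assumed to be
    # a pair (colors, opacities) with shapes (D, H, W, 3) and (D, H, W), where each level
    # halves the resolution of the previous one.
    level = int(np.clip(round(level), 0, len(mips) - 1))
    colors, opacities = mips[level]
    idx = np.clip((position / 2 ** level).astype(int), 0, np.array(colors.shape[:3]) - 1)
    return colors[tuple(idx)], opacities[tuple(idx)]

def trace_cone(mips, origin, direction, aperture, max_dist, voxel_size=1.0):
    # March along the cone axis, reading from coarser mip levels as the cone gets wider,
    # and accumulate color with front-to-back alpha compositing.
    color = np.zeros(3)
    occlusion = 0.0
    t = voxel_size  # start slightly in front of the surface to avoid self-intersection
    while t < max_dist and occlusion < 1.0:
        radius = max(voxel_size, aperture * t)   # cone footprint grows with distance
        level = np.log2(radius / voxel_size)     # wider footprint, coarser mip level
        c, a = sample_voxel(mips, origin + t * direction, level)
        color += (1.0 - occlusion) * a * c
        occlusion += (1.0 - occlusion) * a
        t += radius                              # step size follows the footprint
    return color, occlusion

Indirect diffuse lighting at a surface point is then approximated by tracing a handful of such cones over the hemisphere around the surface normal and summing the results, which is cheap enough to evaluate every frame on a graphics card.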
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.88, "end": 10.88, "text": " I consider this one to be one of the most influential papers in the field of light transport."}, {"start": 10.88, "end": 15.96, "text": " Normally, to create a photorealistic image, we have to create a digital copy of a scene"}, {"start": 15.96, "end": 21.28, "text": " and simulate the paths of millions of light rays between the camera and the light sources."}, {"start": 21.28, "end": 27.080000000000002, "text": " This is a very laborious task that usually takes from minutes to hours on a complex scene,"}, {"start": 27.08, "end": 32.36, "text": " noting that there are many well-known corner cases that can take up to days as well."}, {"start": 32.36, "end": 37.56, "text": " As the rays of light can bounce around potentially indefinitely, and if we add that realistic"}, {"start": 37.56, "end": 43.239999999999995, "text": " materials and detailed scene geometry descriptions are not easy to handle mathematically, it"}, {"start": 43.239999999999995, "end": 47.599999999999994, "text": " is easy to see why this is a notoriously difficult problem."}, {"start": 47.599999999999994, "end": 52.04, "text": " Simulating light transport in real time has been an enduring problem and is still not"}, {"start": 52.04, "end": 57.12, "text": " soft completely for every possible material model and light transport effect."}, {"start": 57.12, "end": 62.84, "text": " However, voxel-con tracing is as good of a solution as one can wish for at the moment."}, {"start": 62.84, "end": 68.12, "text": " The original formulation of the problem is continuous, which means that rays of light can bounce"}, {"start": 68.12, "end": 73.44, "text": " around in infinitely many directions, and the entirety of this digital world is considered"}, {"start": 73.44, "end": 75.08, "text": " to be a continuum."}, {"start": 75.08, "end": 79.8, "text": " If we look at the mathematical formulation, we see infinities everywhere we look."}, {"start": 79.8, "end": 83.64, "text": " If you would like to learn more about this, I am holding a master-level course at the"}, {"start": 83.64, "end": 88.44, "text": " technical university of Vienna, the entirety of which is available on YouTube."}, {"start": 88.44, "end": 92.96, "text": " As always, the link is available in the video description for the more curious fellow scholars"}, {"start": 92.96, "end": 93.96, "text": " out there."}, {"start": 93.96, "end": 99.64, "text": " If we try to approximate this continuous representation with tiny, tiny cubes, we get a version of"}, {"start": 99.64, "end": 103.75999999999999, "text": " the problem that is much less complex and easier to tackle."}, {"start": 103.75999999999999, "end": 108.8, "text": " If we do this well, we can make it adaptive, which means that these cubes are smaller"}, {"start": 108.8, "end": 113.32, "text": " where there's a lot of information, so we don't lose out on many details."}, {"start": 113.32, "end": 117.36, "text": " This data structure we call a sparse voxel-octree."}, {"start": 117.36, "end": 122.28, "text": " For such a solution, mathematicians like to say that this technique works on a discretized"}, {"start": 122.28, "end": 124.64, "text": " version of the continuous problem."}, {"start": 124.64, "end": 129.56, "text": " And since we are solving a vastly simplified version of the problem, the question is always"}, {"start": 129.56, "end": 133.6, "text": " whether this way 
we can remain true to the original solution."}, {"start": 133.6, "end": 139.16, "text": " And the results show beauty, unlike anything we've seen in computer game graphics."}, {"start": 139.16, "end": 145.16, "text": " Just look at this absolutely amazing footage, ice cream for my eyes, and all this runs in"}, {"start": 145.16, "end": 148.56, "text": " real time on your consumer graphics card."}, {"start": 148.56, "end": 152.56, "text": " Imagine this in the virtual reality applications of the future."}, {"start": 152.56, "end": 156.07999999999998, "text": " My goodness, I've chosen the absolute best profession."}, {"start": 156.07999999999998, "end": 161.12, "text": " Also, this technique maps really well to the graphical card and is already implemented"}, {"start": 161.12, "end": 167.16, "text": " in Unreal Engine 4, and Nvidia has a framework, Gameworks, where they are experimenting with"}, {"start": 167.16, "end": 170.64000000000001, "text": " this in their project by the name VXGI."}, {"start": 170.64000000000001, "end": 176.20000000000002, "text": " I had a very pleasant visit at Nvidia's Gameworks lab in Switzerland, not so long ago, friendly"}, {"start": 176.20000000000002, "end": 179.48000000000002, "text": " greetings to all the great and fun people in the team."}, {"start": 179.48000000000002, "end": 181.92000000000002, "text": " Some kings still have to be worked out."}, {"start": 181.92000000000002, "end": 186.6, "text": " For instance, there are still issues with light leaking through thin objects."}, {"start": 186.6, "end": 192.32, "text": " Beyond that, the implementation of this algorithm contains a multitude of tiny little distinct"}, {"start": 192.32, "end": 193.32, "text": " elements."}, {"start": 193.32, "end": 198.2, "text": " It is indeed true that many of the elements are puzzle pieces that are interchangeable"}, {"start": 198.2, "end": 202.79999999999998, "text": " and can be implemented in a number of different ways, and that's likely one of the reasons"}, {"start": 202.79999999999998, "end": 207.88, "text": " why Nvidia and others are still preparing their implementation for widespread industry"}, {"start": 207.88, "end": 208.88, "text": " use."}, {"start": 208.88, "end": 213.95999999999998, "text": " Soon, we'll be able to solidify the details some more and see what the best practices"}, {"start": 213.95999999999998, "end": 214.95999999999998, "text": " are."}, {"start": 214.96, "end": 219.12, "text": " Not wait to see this technique appear in the video games of the future."}, {"start": 219.12, "end": 224.76000000000002, "text": " Note that this and future episodes will be available in 4K resolution for a significant"}, {"start": 224.76000000000002, "end": 227.24, "text": " bump in the visual quality of the series."}, {"start": 227.24, "end": 231.96, "text": " It takes a ton of resources to produce these videos, but it's now possible through the"}, {"start": 231.96, "end": 234.96, "text": " support of you fellow scholars on Patreon."}, {"start": 234.96, "end": 238.96, "text": " There's a more detailed write-up on that, I've included it in the video description."}, {"start": 238.96, "end": 244.24, "text": " Thank you so much for supporting the show throughout 2016, and looking forward to continuing"}, {"start": 244.24, "end": 247.0, "text": " our journey together in 2017."}, {"start": 247.0, "end": 275.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=rAbhypxs1qQ
Image Synthesis From Text With Deep Learning | Two Minute Papers #116
The paper "StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks" is available here: https://arxiv.org/abs/1612.03242 Source code for this project is also available here: https://github.com/hanzhanggit/StackGAN We have a Patreon post on the improvements you can expect from Two Minute Papers in 2017. Lots of goodies behind the link, have a look! https://www.patreon.com/posts/7607896 Our previous episode on Recurrent Neural Networks: https://www.youtube.com/watch?v=Jkkjy7dVdaY Recurrent Neural Network Writes Sentences About Images: https://www.youtube.com/watch?v=e-WB4lfg30M WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credit - https://pixabay.com/photo-1616713/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is what we have been waiting for. Earlier, we talked about a neural network that was able to describe in a full sentence what we can see on an image, and it had done a damn good job at that. Then, we talked about a technique that did something really crazy: the exact opposite. We wrote a sentence and it created new images according to that. This is already incredible, and we can create an algorithm like this by training not one, but two neural networks. The first is the generative network that creates millions of new images, and the discriminator network judges whether these are real or fake images. The generative network can improve its game based on the feedback and will create more and more realistic looking images, while the discriminator network gets better and better at telling real images from fake ones. Like humans, this rivalry drives both neural networks towards perfecting their craft. This architecture is called a generative adversarial network. It is also like the classical, ever-going arms race between criminals who create counterfeit money and the government, which seeks to implement newer and newer measures to tell a real hundred dollar bill from a fake one. The previous generative adversarial networks were adept at creating new images, but due to their limitations, their image outputs were the size of a stamp at best. And we were wondering how long until we get much higher resolution images from such a system. Well, I am delighted to say that, apparently within the same year, in this work a two-stage version of this architecture is proposed. The Stage 1 network is close to the generative adversarial networks we described, and most of the fun happens in the Stage 2 network, which takes this rough, low resolution image and the text description and is told to correct the defects of the previous output and create a higher resolution version of it. In the video, the input text description and the Stage 1 results are shown, and building on that, the higher resolution Stage 2 images are presented. And the results are unreal. There was a previous article and Two Minute Papers episode on the unreasonable effectiveness of recurrent neural networks. If that is unreasonable effectiveness, then what is this? The rate of progress in machine learning research is unlike any other field I have ever seen. I honestly can't believe what I am seeing here. Dear Fellow Scholars, what you see might very well be history in the making. Are there still faults in the results? Of course there are. Are they perfect? No, they certainly aren't. However, research is all about progress, and it's almost never possible to go from zero to 100% with one new revolutionary idea. However, I am sure that in 2017 researchers will start working on generating full HD animations with an improved version of this architecture. Make sure to have a look at the paper, where the ideas, challenges and possible solutions are very clearly presented. And for now, I need some time to digest these results. Currently, I feel like I've been dropped into the middle of a science fiction movie. And this one will be our last video for this year. We have had an amazing year with some incredible growth on the channel. Way more of you Fellow Scholars decided to come with us on our journey than I would have imagined. Thank you so much for being a part of Two Minute Papers. We'll be continuing full steam ahead next year, and for now, I wish you a Merry Christmas and Happy Holidays. 2016 was an amazing year for research, and 2017 will be even better. Stay tuned. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 5.2, "end": 7.48, "text": " This is what we have been waiting for."}, {"start": 7.48, "end": 12.68, "text": " Earlier, we talked about a neural network that was able to describe in a full sentence"}, {"start": 12.68, "end": 16.92, "text": " what we can see on an image, and it had done a damn good job at that."}, {"start": 16.92, "end": 22.44, "text": " Then, we have talked about the technique that did something really crazy to exact opposite."}, {"start": 22.44, "end": 26.68, "text": " We wrote a sentence and it created new images according to that."}, {"start": 26.68, "end": 32.0, "text": " This is already incredible and we can create an algorithm like this by training not one,"}, {"start": 32.0, "end": 33.68, "text": " but two neural networks."}, {"start": 33.68, "end": 38.92, "text": " The first is the generative network that creates millions of new images and the discriminator"}, {"start": 38.92, "end": 42.72, "text": " network judges whether these are real or fake images."}, {"start": 42.72, "end": 47.239999999999995, "text": " The generative network can improve its game based on the feedback and will create more"}, {"start": 47.239999999999995, "end": 51.96, "text": " and more realistic looking images while the discriminator network gets better and better"}, {"start": 51.96, "end": 54.68, "text": " at telling real images from fake ones."}, {"start": 54.68, "end": 60.24, "text": " Like humans, this rivery drives both neural networks towards perfecting their crafts."}, {"start": 60.24, "end": 64.24, "text": " This architecture is called a generative adversarial network."}, {"start": 64.24, "end": 69.32, "text": " It is also like the classical ever-going arms race between criminals who create counterfeit"}, {"start": 69.32, "end": 74.68, "text": " money and the government which seeks to implement newer and newer measures to tell a real"}, {"start": 74.68, "end": 77.16, "text": " hundred dollar bill from a fake one."}, {"start": 77.16, "end": 82.48, "text": " The previous generative adversarial networks were adept at creating new images, but due"}, {"start": 82.48, "end": 87.24000000000001, "text": " to their limitations their image outputs were the size of a stamp at best."}, {"start": 87.24000000000001, "end": 92.84, "text": " And we were wondering how long until we get much higher resolution images from such a system."}, {"start": 92.84, "end": 98.52000000000001, "text": " Well, I am delighted to say that apparently within the same year, in this work a two-stage"}, {"start": 98.52000000000001, "end": 101.0, "text": " version of this architecture is proposed."}, {"start": 101.0, "end": 106.16, "text": " The Stage 1 network is close to the generative adversarial networks we described and most"}, {"start": 106.16, "end": 111.80000000000001, "text": " of the fun happens in the Stage 2 network that takes this rough, low resolution image"}, {"start": 111.8, "end": 117.12, "text": " and the text description and is told to correct the defects of the previous output and create"}, {"start": 117.12, "end": 119.24, "text": " a higher resolution version of it."}, {"start": 119.24, "end": 123.92, "text": " In the video, the input text description and the Stage 1 results are shown and building"}, {"start": 123.92, "end": 128.28, "text": " on that the higher resolution Stage 2 images are presented."}, {"start": 128.28, "end": 130.84, "text": " And the results are unreal."}, 
{"start": 130.84, "end": 135.8, "text": " There was a previous article and two minute papers episode on the unreasonable effectiveness"}, {"start": 135.8, "end": 137.68, "text": " of recurrent neural networks."}, {"start": 137.68, "end": 141.24, "text": " If that is unreasonable effectiveness, then what is this?"}, {"start": 141.24, "end": 147.04000000000002, "text": " The rate of progress in machine learning research is unlike any other field I have ever seen."}, {"start": 147.04000000000002, "end": 150.48000000000002, "text": " I honestly can't believe what I am seeing here."}, {"start": 150.48000000000002, "end": 155.64000000000001, "text": " Dear Fellow Scholars, what you see might very well be history in the making."}, {"start": 155.64000000000001, "end": 157.88, "text": " Are there still faults in the results?"}, {"start": 157.88, "end": 159.04000000000002, "text": " Of course there are."}, {"start": 159.04000000000002, "end": 160.04000000000002, "text": " Are they perfect?"}, {"start": 160.04000000000002, "end": 162.24, "text": " No, they certainly aren't."}, {"start": 162.24, "end": 167.96, "text": " However, research is all about progress and it's almost never possible to go from zero"}, {"start": 167.96, "end": 171.76000000000002, "text": " to 100% with one new revolutionary idea."}, {"start": 171.76000000000002, "end": 178.92000000000002, "text": " However, I am sure that in 2017 researchers will start working on generating full HD animations"}, {"start": 178.92000000000002, "end": 181.32, "text": " with an improved version of this architecture."}, {"start": 181.32, "end": 186.0, "text": " Make sure to have a look at the paper where the ideas, challenges and possible solutions"}, {"start": 186.0, "end": 188.16, "text": " are very clearly presented."}, {"start": 188.16, "end": 191.44, "text": " And for now, I need some time to digest these results."}, {"start": 191.44, "end": 196.28, "text": " Currently, I feel like being dropped into the middle of a science fiction movie."}, {"start": 196.28, "end": 199.24, "text": " And this one will be our last video for this year."}, {"start": 199.24, "end": 203.56, "text": " We have had an amazing year with some incredible growth on the channel."}, {"start": 203.56, "end": 207.72, "text": " Way more of you Fellow Scholars decided to come with us on our journey than I would have"}, {"start": 207.72, "end": 208.72, "text": " imagined."}, {"start": 208.72, "end": 211.8, "text": " Thank you so much for being a part of two-minute papers."}, {"start": 211.8, "end": 216.76, "text": " We'll be continuing full steam ahead next year and for now, I wish you a Merry Christmas"}, {"start": 216.76, "end": 218.16, "text": " and Happy Holidays."}, {"start": 218.16, "end": 223.8, "text": " 2016 was an amazing year for research and 2017 will be even better."}, {"start": 223.8, "end": 224.8, "text": " Stay tuned."}, {"start": 224.8, "end": 228.48000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=PMSV7CjBuZI
Crumpling Sound Synthesis | Two Minute Papers #115
The paper "Crumpling Sound Synthesis" is available here: http://www.cs.columbia.edu/cg/crumpling/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Hmmmm. Today, we are going to crush some soda cans. In the footage that you see here, the animations are performed by an already existing algorithm for thin shell deformations. And for a complete sensorial experience, this piece of work aims to synthesize sound for these phenomena. Sounds for crumpling up all kinds of candy wraps, foils, and plastic bags. A lofty, noble goal. Loving it. However, this problem is extraordinarily difficult. The reason is that these crumpling simulations are amazingly detailed, and even if we knew all the physical laws for the sound synthesis, which is already pretty crazy, it would still be a fruitless endeavor to take into consideration every single thing that takes place in the simulation. We have to come up with ways to cut corners to decrease the execution time of our algorithm. Running a naive, exhaustive search would take tens of hours for only several seconds of footage. And the big question is, of course, what can we do about it? And before we proceed, just a quick reminder that the geometry of these models is given by a lot of connected points that people in computer graphics like to call vertices. The sound synthesis takes place by observing the changes in the stiffness of these models, which is the source of the crumpling noise. Normally, our sound simulation scales with the number of vertices, and it is abundantly clear that there are simply too many of them to go through one by one. To this end, we should strive to reduce the complexity of this problem. First, we start with identifying and discarding the less significant vibration modes. Beyond that, if, for one of these vertices, we observe that a similar kind of buckling behavior is present in its neighborhood, we group up these vertices into a patch. And we then forget about the vertices and run the sound synthesis on these patches. And of course, the number of patches is significantly less than the number of vertices in the original model. In this footage, you can see some of these patches, and it turns out that the execution time can be significantly decreased by these optimizations. With these techniques, we can expect results at least five times quicker. But if we are willing to introduce slight degradations to the quality of the sounds, we can even go ten times quicker with barely perceptible changes. To evaluate the quality of the solutions, there is a user study presented in the paper, and the pinnacle of all tests is, of course, when we let reality be our judge. Everything so far sounds great on paper, but how does it compare to what we experience in reality? Wow, truly excellent results. Suffice to say they are absolutely crushing it. And we haven't even talked about stochastic enrichment and how one of these problems can be solved optimally via dynamic programming. If you are interested, make sure to have a look at the paper. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojol Nefahir."}, {"start": 5.0, "end": 6.0, "text": " Hmmmm."}, {"start": 6.0, "end": 11.0, "text": " Today, we are going to crush some soda cans."}, {"start": 11.0, "end": 18.0, "text": " In the footage that you see here, the animations are performed by an already existing algorithm for thin shell deformations."}, {"start": 18.0, "end": 25.0, "text": " And for a complete sensorial experience, this piece of work aims to synthesize sound for these phenomena."}, {"start": 25.0, "end": 30.0, "text": " Sounds for crumpling up all kinds of candy wraps, foils, and plastic bags."}, {"start": 30.0, "end": 32.0, "text": " A lofty, noble goal."}, {"start": 32.0, "end": 33.0, "text": " Loving it."}, {"start": 33.0, "end": 36.0, "text": " However, this problem is extraordinarily difficult."}, {"start": 36.0, "end": 46.0, "text": " The reason is that these crumpling simulations are amazingly detailed, and even if we knew all the physical laws for the sound synthesis, which is already pretty crazy,"}, {"start": 46.0, "end": 52.0, "text": " it would still be a fruitless endeavor to take into consideration every single thing that takes place in the simulation."}, {"start": 52.0, "end": 58.0, "text": " We have to come up with ways to cut corners to decrease the execution time of our algorithm."}, {"start": 58.0, "end": 64.0, "text": " Running a naive, exhaustive search would take tens of hours for only several seconds of footage."}, {"start": 64.0, "end": 67.0, "text": " And the big question is, of course, what can we do about it?"}, {"start": 67.0, "end": 77.0, "text": " And before we proceed, just a quick reminder that the geometry of these models are given by a lot of connected points that people in computer graphics like to call vertices."}, {"start": 77.0, "end": 85.0, "text": " The sound synthesis takes place by observing the changes in the stiffness of these models, which is the source of the crumpling noise."}, {"start": 85.0, "end": 94.0, "text": " Normally, our sound simulation scales with a number of vertices, and it is abundantly clear that there are simply too many of them to go through one by one."}, {"start": 94.0, "end": 98.0, "text": " To this end, we should strive to reduce the complexity of this problem."}, {"start": 98.0, "end": 103.0, "text": " First, we start with identifying and discarding the less significant vibration modes."}, {"start": 103.0, "end": 113.0, "text": " Beyond that, if in one of these vertices, we observe that the similar kind of buckling behavior is presenting its neighborhood, we group up these vertices into a patch."}, {"start": 113.0, "end": 119.0, "text": " And we then forget about the vertices and run the sound synthesis on these patches."}, {"start": 119.0, "end": 125.0, "text": " And of course, the number of patches is significantly less than the number of vertices in the original model."}, {"start": 125.0, "end": 133.0, "text": " In this footage, you can see some of these patches, and it turns out that the execution time can be significantly decreased by these optimizations."}, {"start": 133.0, "end": 137.0, "text": " With these techniques, we can expect results in at least five times quicker."}, {"start": 137.0, "end": 146.0, "text": " But if we are willing to introduce slight degradations to the quality of the sounds, we can even go ten times quicker with barely perceptible changes."}, {"start": 146.0, "end": 154.0, "text": " To evaluate the quality of the 
solutions, there is a user study presented in the paper, and the pinnacle of all tests is, of course,"}, {"start": 154.0, "end": 156.0, "text": " when we let reality be our judge."}, {"start": 156.0, "end": 185.0, "text": " Everything so far sounds great on paper, but how does it compare to what we experience in reality?"}, {"start": 186.0, "end": 201.0, "text": " Wow, truly excellent results. Suffice to say they are absolutely crushing it."}, {"start": 201.0, "end": 209.0, "text": " And we haven't even talked about stochastic enrichment and how one of these problems can be solved optimally via dynamic programming."}, {"start": 209.0, "end": 212.0, "text": " If you are interested, make sure to have a look at the paper."}, {"start": 212.0, "end": 216.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=j7XWCCvBrwU
3D Printing Flexible Shells For Molding | Two Minute Papers #114
The paper "FlexMolds: Automatic Design of Flexible Shells for Molding" is available here: http://vcg.isti.cnr.it/Publications/2016/MPBC16/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about 3D printing flexible molds for objects with detailed geometry. The main observation is that the final object not only has to be cast, but also has to be removed conveniently from the mold. Finding an appropriate layout for the cuts is a non-trivial problem. The technique endeavors to use the fewest cuts possible, and the length of the cuts is also subject to minimization. I see the light bulb lighting up in the heads of our seasoned Fellow Scholars, immediately noticing that this sounds like an optimization problem. And in this problem, we start out from a dense cut layout and iteratively remove as many of these cuts as possible until some prescribed threshold is met. However, we have to be vigilant about the fact that these cuts will result in deformations during the removal process. We mentioned before that we are interested in shapes that have geometry that is rich in details, therefore this distortion effect is to be minimized aggressively. Also, we cannot remove these cuts indefinitely, because sometimes more cuts have to be added to reduce the stress induced by the removal process. This is a cunning plan. However, it is a plan that only works if we can predict where and how these deformations will happen, therefore we have to simulate this process on our computer. During removal, forces are applied to the mold, which we also have to take into consideration. To this end, there is an actual simulation of the entirety of the extraction process to make sure that the material can be removed from the mold in a non-destructive manner. Wow! The paper discusses tons of issues that arise from this problem formulation. For instance, what one should do with the tiny air bubbles stuck in the resin. The optimization part is also non-trivial, and a highly effective homebrew solution is presented for it. And there's a lot more. Make sure to have a look. Of course, as always, we would love to hear your ideas about possible applications of this technique. Leave your thoughts in the comments section. Thanks for watching and for your generous support. For now, thank you very much.
[{"start": 0.0, "end": 5.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karojol Naifahir."}, {"start": 5.2, "end": 10.72, "text": " This work is about 3D printing flexible molds for objects with detailed geometry."}, {"start": 10.72, "end": 15.84, "text": " The main observation is that the final object not only has to be cast, but also has to be"}, {"start": 15.84, "end": 18.68, "text": " removed conveniently from the mold."}, {"start": 18.68, "end": 22.28, "text": " Finding an appropriate layout for the cuts is a non-trivial problem."}, {"start": 22.28, "end": 27.48, "text": " The technique endeavors to have the least amount of cuts and the length of the cuts is also"}, {"start": 27.48, "end": 29.2, "text": " subject to minimization."}, {"start": 29.2, "end": 34.44, "text": " I see the light bulb lighting up in the heads of our seasoned Fellow Scholars immediately"}, {"start": 34.44, "end": 37.879999999999995, "text": " noticing that this sounds like an optimization problem."}, {"start": 37.879999999999995, "end": 43.36, "text": " And in this problem, we start out from a dense cut layout and iteratively remove as many"}, {"start": 43.36, "end": 47.76, "text": " of these cuts as possible until some prescribed threshold is met."}, {"start": 47.76, "end": 52.92, "text": " However, we have to be vigilant about the fact that these cuts will result in deformations"}, {"start": 52.92, "end": 55.04, "text": " during the removal process."}, {"start": 55.04, "end": 59.36, "text": " We mentioned before that we are interested in shapes that have geometry that is rich in"}, {"start": 59.36, "end": 64.12, "text": " details, therefore this distortion effect is to be minimized aggressively."}, {"start": 64.12, "end": 69.68, "text": " Also, we cannot remove these cuts indefinitely because sometimes more cuts have to be added"}, {"start": 69.68, "end": 73.28, "text": " to reduce the stress induced by the removal process."}, {"start": 73.28, "end": 74.8, "text": " This is a cunning plan."}, {"start": 74.8, "end": 81.56, "text": " However, a plan that only works if we can predict where and how these deformations will happen,"}, {"start": 81.56, "end": 85.4, "text": " therefore we have to simulate this process on our computer."}, {"start": 85.4, "end": 90.68, "text": " During removal, forces are applied to the mold, which we also have to take into consideration."}, {"start": 90.68, "end": 95.64, "text": " To this end, there is an actual simulation of the entirety of the extraction process"}, {"start": 95.64, "end": 100.64, "text": " to make sure that the material can be removed from the mold in a non-destructive manner."}, {"start": 100.64, "end": 101.96000000000001, "text": " Wow!"}, {"start": 101.96000000000001, "end": 106.72, "text": " The paper discusses tons of issues that arise from this problem formulation."}, {"start": 106.72, "end": 111.6, "text": " For instance, what one should do with the tiny air bubbles stuck in the resin."}, {"start": 111.6, "end": 117.4, "text": " Or the optimization part is also non-trivial to which a highly effective homebrew solution"}, {"start": 117.4, "end": 118.56, "text": " is presented."}, {"start": 118.56, "end": 119.8, "text": " And there's a lot more."}, {"start": 119.8, "end": 121.0, "text": " Make sure to have a look."}, {"start": 121.0, "end": 125.6, "text": " Of course, as always, we would love to hear your ideas about possible applications of"}, {"start": 125.6, "end": 126.6, "text": " this technique."}, {"start": 126.6, "end": 140.04, "text": 
" Leave your thoughts in the comments section."}, {"start": 140.04, "end": 144.88, "text": " Thanks for watching and for your generous support."}, {"start": 144.88, "end": 159.2, "text": " For now, thank you very much."}]
Two Minute Papers
https://www.youtube.com/watch?v=cUWDeDRet4c
Multiphase Fluid Simulations | Two Minute Papers #113
The paper "Multiphase SPH Simulation for Interactive Fluids and Solids" is available here: http://cg.cs.tsinghua.edu.cn/papers/SIG_2016_Multiphase.pdf http://cg.cs.tsinghua.edu.cn/research.htm WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits - https://pixabay.com/photo-165192/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. What a wonderful day to talk about fluid simulations. This technique is an extension to smoothed particle hydrodynamics, or SPH in short, which is a widely used particle-based simulation technique where the visual quality scales with the number of simulated particles. The more particles we use in the simulation, the more eye candy we can expect. And the goal here is to create an extension of these SPH-based simulations to include deformable bodies and granular materials in the computations. This way, it is possible to create a scene where we have instant coffee and soft candy dissolving in water, which not only looks beautiful, but sounds like a great way to get your day started. Normally, we have to solve a separate set of equations for each of the phases or material types present, but because the proposed method is able to put them into one unified equation, it scales well with the number of materials within the simulation. This is not only convenient from a theoretical standpoint, but it also maps well to parallel architectures, and the results shown in the video were run on a relatively high-end consumer video card. This is remarkable, as it should not be taken for granted that a new fluid simulation technique runs well on the GPU. The results indeed indicate that the number of phases only has a mild effect on the execution time of the algorithm. A nice and general framework for fluid-solid interactions, dissolution, elastoplastic solids, and deformable bodies. What a fantastic value proposition. I could watch and play with these all day. I'll try my best to resist, but in case the next episode is coming late, you know where I am. The quality of the paper is absolutely top tier, and if you like physics, you're going to have lots of fun reading it. Thanks for watching, and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.0, "end": 8.8, "text": " What a wonderful day to talk about fluid simulations."}, {"start": 8.8, "end": 14.56, "text": " This technique is an extension to smooth particle hydrodynamics or SPH in short, which is a"}, {"start": 14.56, "end": 19.88, "text": " widely used particle-based simulation technique where the visual quality scales with the number"}, {"start": 19.88, "end": 21.76, "text": " of simulated particles."}, {"start": 21.76, "end": 26.52, "text": " The more particles we use in the simulation, the more eye candy we can expect."}, {"start": 26.52, "end": 31.52, "text": " And the goal here is to create an extension of this SPH-based simulations to include"}, {"start": 31.52, "end": 35.56, "text": " deformable bodies and granular materials to the computations."}, {"start": 35.56, "end": 40.92, "text": " This way, it is possible to create a scene where we have instant coffee and soft candy"}, {"start": 40.92, "end": 46.08, "text": " dissolving in water, which not only looks beautiful, but sounds like a great way to get"}, {"start": 46.08, "end": 47.480000000000004, "text": " your day started."}, {"start": 47.480000000000004, "end": 52.8, "text": " Normally, we have to solve a separate set of equations for each of the phases or material"}, {"start": 52.8, "end": 58.08, "text": " types present, but because of this proposed method, it is able to put them in one unified"}, {"start": 58.08, "end": 62.879999999999995, "text": " equation, it scales well with the number of materials within the simulation."}, {"start": 62.879999999999995, "end": 67.96, "text": " This is not only convenient from a theoretical standpoint, but it also maps well to parallel"}, {"start": 67.96, "end": 73.28, "text": " architectures, and the results shown in the video were run on a relatively high end consumer"}, {"start": 73.28, "end": 74.96, "text": " and video card."}, {"start": 74.96, "end": 79.67999999999999, "text": " This is remarkable as it should not be taken for granted that a new fluid simulation"}, {"start": 79.67999999999999, "end": 82.03999999999999, "text": " technique runs well on the GPU."}, {"start": 82.04, "end": 87.16000000000001, "text": " The results indeed indicate that the number of phases only have a mild effect on the execution"}, {"start": 87.16000000000001, "end": 88.80000000000001, "text": " time of the algorithm."}, {"start": 88.80000000000001, "end": 94.84, "text": " A nice and general framework for fluid solid interactions, these solutions, elastoplastic"}, {"start": 94.84, "end": 97.80000000000001, "text": " solids, and deformable bodies."}, {"start": 97.80000000000001, "end": 100.0, "text": " What a fantastic value proposition."}, {"start": 100.0, "end": 102.60000000000001, "text": " I could watch and play with these all day."}, {"start": 102.60000000000001, "end": 107.16000000000001, "text": " I'll try my best to resist, but in case the next episode is coming late, you know where"}, {"start": 107.16000000000001, "end": 108.16000000000001, "text": " I am."}, {"start": 108.16, "end": 112.6, "text": " The quality of the paper is absolutely top tier, and if you like physics, you're going"}, {"start": 112.6, "end": 114.47999999999999, "text": " to have lots of fun reading it."}, {"start": 114.48, "end": 141.24, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=tB0AVkPDDJU
Precomputed Deformation Simulations | Two Minute Papers #112
The paper "Expediting Precomputation for Reduced Deformable Simulation " is available here: http://www.cs.columbia.edu/cg/fastprecomp/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work is about reducing the time needed to simulate elastic deformations by means of pre-computation. Okay, so what does the term pre-computation mean? If we are at an open book exam and we are short on time, which is basically every time, it would be much better to do a pre-computation step, namely studying at home for a few days before, and then, when we are there, we are endowed with quite a bit of knowledge and are guaranteed to do much better than trying to grasp the simplest concepts on the spot. This pre-computation step we only have to do once, and it almost doesn't matter how lengthy it is, because after that we can answer any question in this topic in the future. Well, sometimes passing exams is not as easy as described here, but a fair bit of pre-computation often goes a long way. That said, the authors have identified three major bottlenecks in already existing pre-computation techniques and proposed optimizations to speed them up considerably at the cost of higher memory consumption. For instance, the algorithm is trained on a relatively large set of training pose examples. If we don't have enough of these training examples, the quality of the animations will be unsatisfactory, but if we use too many, that's too resource intensive. We have to choose just the right amount and the right kinds of poses, which is a highly non-trivial process. Note that this training is not the same kind of training we are used to seeing with neural networks. This work doesn't have anything to do with neural networks at all. The results of the new technique are clearly very close to the results we would obtain with standard methods. However, the computation time is 20 to 2,000 times less. In the more favorable cases, computing deformations that would take several hours can take less than a second. That is one jaw-dropping result and a hefty value proposition indeed. This example shows that after a short pre-computation step, we can start torturing this poor armadillo and expect high-quality elastic deformations. And there are a lot of other things to be learned from the paper: Gram-Schmidt orthogonalization, augmented Krylov iterations, Newton-PCG solvers. Essentially, if you pick up a dry textbook on linear algebra and for every technique you see there, you ask what on earth this is useful for, you wouldn't have to go through hundreds of works. You would find a ton of answers in just this one absolutely amazing paper. Also, please don't forget that you Fellow Scholars make Two Minute Papers happen. If you wish to support the show and get access to cool perks, like an exclusive Early Access program where you can watch these episodes 16 to 24 hours in advance, check out our page on Patreon. Just click on the icon with the letter P at the end of this video or just have a look at the video description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejolene Fahir."}, {"start": 5.04, "end": 10.48, "text": " This piece of work is about reducing the time needed to simulate elastic deformations by"}, {"start": 10.48, "end": 12.48, "text": " means of pre-computation."}, {"start": 12.48, "end": 15.88, "text": " Okay, so what does the term pre-computation mean?"}, {"start": 15.88, "end": 21.84, "text": " If we are at an open book exam and we are short on time, which is basically every time,"}, {"start": 21.84, "end": 27.48, "text": " it would be much better to do a pre-computation step, namely studying at home for a few days"}, {"start": 27.48, "end": 32.96, "text": " before, and then, when we are there, we are in doubt with quite a bit of knowledge and"}, {"start": 32.96, "end": 38.92, "text": " are guaranteed to do much better than trying to grasp the simplest concepts on the spot."}, {"start": 38.92, "end": 43.92, "text": " This pre-computation step we only have to do once, and it almost doesn't matter how lengthy"}, {"start": 43.92, "end": 49.04, "text": " it is, because after that we can answer any question in this topic in the future."}, {"start": 49.04, "end": 55.08, "text": " Well, sometimes passing exams is not as easy as described here, but a fair bit of pre-computation"}, {"start": 55.08, "end": 56.84, "text": " often goes a long way."}, {"start": 56.84, "end": 61.88, "text": " That's saying, the authors have identified three major bottlenecks in already existing"}, {"start": 61.88, "end": 67.60000000000001, "text": " pre-computation techniques and proposed optimizations to speed them up considerably at the cost"}, {"start": 67.60000000000001, "end": 69.52000000000001, "text": " of higher memory consumption."}, {"start": 69.52000000000001, "end": 75.28, "text": " For instance, the algorithm is trained on a relatively large set of training pose examples."}, {"start": 75.28, "end": 79.28, "text": " If we don't have enough of these training examples, the quality of the animations will"}, {"start": 79.28, "end": 84.24000000000001, "text": " be unsatisfactory, but if we use too many, that's too resource intensive."}, {"start": 84.24, "end": 88.64, "text": " We have to choose just the right amount and the right kinds of poses, which is a highly"}, {"start": 88.64, "end": 90.6, "text": " non-trivial process."}, {"start": 90.6, "end": 95.19999999999999, "text": " Note that this training is not the same kind of training we are used to see with neural"}, {"start": 95.19999999999999, "end": 96.19999999999999, "text": " networks."}, {"start": 96.19999999999999, "end": 98.88, "text": " This work doesn't have anything to do with neural networks at all."}, {"start": 98.88, "end": 103.6, "text": " The results of the new technique are clearly very close to the results we would obtain with"}, {"start": 103.6, "end": 104.88, "text": " standard methods."}, {"start": 104.88, "end": 109.56, "text": " However, the computation time is 20 to 2,000 times less."}, {"start": 109.56, "end": 115.28, "text": " The more favorable cases computing deformations that would take several hours can take less"}, {"start": 115.28, "end": 116.8, "text": " than a second."}, {"start": 116.8, "end": 120.96000000000001, "text": " That is one draw-dropping result and they have to value a proposition indeed."}, {"start": 120.96000000000001, "end": 126.80000000000001, "text": " This example shows that after a short pre-computation step, we can start torturing this poor armadillo"}, 
{"start": 126.80000000000001, "end": 130.24, "text": " and expect high-quality elastic deformations."}, {"start": 130.24, "end": 133.52, "text": " And there is a lot of other things to be learned from the paper."}, {"start": 133.52, "end": 139.68, "text": " Framesmith or Thorganization, augmented Creelov iterations, Newton PCG solvers, essentially"}, {"start": 139.68, "end": 144.72, "text": " if you pick up a dry textbook on linear algebra and for every technique you see there, you"}, {"start": 144.72, "end": 147.56, "text": " ask what on earth this is useful for."}, {"start": 147.56, "end": 149.84, "text": " You wouldn't have to go through hundreds of works."}, {"start": 149.84, "end": 155.48000000000002, "text": " You would find a ton of answers in just this one absolutely amazing paper."}, {"start": 155.48000000000002, "end": 159.88, "text": " Also, please don't forget that you fellow scholars make two minute papers happen."}, {"start": 159.88, "end": 165.2, "text": " If you wish to support the show and get access to cool perks like an exclusive Early Access"}, {"start": 165.2, "end": 170.84, "text": " program where you can watch these episodes 16 to 24 hours in advance, check out our page"}, {"start": 170.84, "end": 172.0, "text": " on Patreon."}, {"start": 172.0, "end": 176.44, "text": " Just click on the icon with the letter P at the end of this video or just have a look at"}, {"start": 176.44, "end": 177.84, "text": " the video description."}, {"start": 177.84, "end": 196.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DzsZ2qMtEUE
Sound Propagation With Bidirectional Path Tracing | Two Minute Papers #111
The paper "Interactive Sound Propagation with Bidirectional Path Tracing" is available here: http://gaps-zju.org/bst/ Veach's paper on Multiple Importance Sampling: http://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Veach95.pdf http://dl.acm.org/citation.cfm?id=218498 I am also holding a full course on light transport simulations at the Technical University of Vienna. There is plenty of discussion on path tracing and bidirectional path tracing therein: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image is courtesy of Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Dubrovnik,_palazzo_sponza,_cortile_02.JPG Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Imagine if we had an accurate algorithm to simulate how different sound effects would propagate in a virtual world. We would find computer games exhibiting gunfire in open areas or a pianist inside a castle courtyard to be way more immersive, and we've been waiting for efficient techniques for this for quite a while now. This is a research field where convolutions enjoy quite a bit of attention, due to the fact that they are a reliable and efficient way to approximate how a given signal would sound in a room with given geometry and material properties. However, the keyword is approximate. This, however, is one of those path sampling techniques that gives us the real deal, so I am quite excited for that. So what about this path sampling thing? This means an actual simulation of sound waves. We have a vast literature and decades of experience in simulating how rays of light bounce and reflect around in a scene, and leaning on this knowledge, we can create beautiful photorealistic images. The first idea is to adapt the mathematical framework of light simulations to be able to do the very same with sound waves. Path tracing is a technique where we build light paths from the camera, bounce them around in a scene and hope that we hit a light source with these rays. If this happens, then we compute the amount of energy that is transferred from the light source to the camera. Note that energy is a more popular and journalistic term here; what researchers actually measure here is a quantity called radiance. The main contribution of this work is adapting bidirectional path tracing to sound. This is a technique originally designed for light simulations that builds light paths from both the light source and the camera at the same time. And it is significantly more efficient than the classical path tracer on difficult indoor scenes. And of course, the main issue with these methods is that they have to simulate a large number of rays to obtain a satisfactory result, and many of these rays don't contribute anything to the final result; only a small subset of them are responsible for most of the image we see or sound we hear. It is a bit like the Pareto principle or the 80-20 rule on steroids. In fact, in which infinitely many points can be written in the other solstice of the rest of it. In a way that is a technique, infinitely many of us can need to produce a few digits with a half of the simple zero, the principle of position under the concept of base. Pure systems with very small and fixed, essentially very rare, but very spent here as an English, where we use score as in false score and as standard. This is ice cream for my ears. Love it. This work also introduces a metric to not only be able to compare similar sound synthesis techniques in the future, but the proposed technique is built around minimizing this metric, which leads us to an idea on which rays carry important information, and which ones we are better off discarding. I also like this minimap on the upper left that actually shows, for what we hear in this footage, exactly where the sound sources are and how they change their positions; looking forward to seeing and listening to similar presentations in future papers in this area. A typical number for the execution time of the algorithm is between 15 to 20 milliseconds per frame on a consumer grade processor. That is about 50 to 65 frames per second. The position of the sound sources makes a great deal of difference for the classical path tracer. The bidirectional path tracer, however, is not only more effective, but offers significantly more consistent results as well. This new method is especially useful in these cases. There are way more details explained in the paper. For instance, it also supports path caching and also borrows the all-powerful multiple importance sampling from photorealistic rendering research. Have a look. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifeher."}, {"start": 4.64, "end": 11.48, "text": " Imagine if we had an accurate algorithm to simulate how different sound effects would propagate in a virtual world."}, {"start": 11.48, "end": 19.96, "text": " We would find computer games exhibiting gunfire in open areas or a pianist inside a castle courtyard to be way more immersive,"}, {"start": 19.96, "end": 23.6, "text": " and we've been waiting for efficient techniques for this for quite a while now."}, {"start": 23.6, "end": 37.760000000000005, "text": " This is a research field where convolutions enjoy quite a bit of attention due to the fact that they are irreliable and efficient way to approximate how a given signal would sound in a room with given geometry and material properties."}, {"start": 37.760000000000005, "end": 40.400000000000006, "text": " However, the keyword is approximate."}, {"start": 40.400000000000006, "end": 47.08, "text": " This, however, is one of those past sampling techniques that gives us the real deal, so quite excited for that."}, {"start": 47.08, "end": 49.32, "text": " So what about this path sampling thing?"}, {"start": 49.32, "end": 52.68000000000001, "text": " This means an actual simulation of sound waves."}, {"start": 52.68, "end": 65.96000000000001, "text": " We have a vast literature and decades of experience in simulating how rays of light, bounds, and reflect around in a scene and leaning on this knowledge, we can create beautiful photorealistic images."}, {"start": 65.96000000000001, "end": 74.12, "text": " The first idea is to adapt the mathematical framework of light simulations to be able to do the very same with sound waves."}, {"start": 74.12, "end": 82.36, "text": " Past tracing is a technique where we build light paths from the camera, bounce them around in a scene and hope that we hit a light source with these rays."}, {"start": 82.36, "end": 88.36, "text": " If this happens, then we compute the amount of energy that is transferred from the light source to the camera."}, {"start": 88.36, "end": 96.84, "text": " Note that energy is a more popular and journalistic term here, what researchers actually measure here is a quantity called radiance."}, {"start": 96.84, "end": 102.2, "text": " The main contribution of this work is adapting by direction of past tracing to sound."}, {"start": 102.2, "end": 111.0, "text": " This is a technique originally designed for light simulations that builds light paths from both the light source and the camera at the same time."}, {"start": 111.0, "end": 116.68, "text": " And it is significantly more efficient than the classical pastracer on difficult indoor scenes."}, {"start": 116.68, "end": 124.44, "text": " And of course, the main issue with these methods is that they have to simulate a large number of rays to obtain a satisfactory result,"}, {"start": 124.44, "end": 134.52, "text": " and many of these rays don't contribute anything to the final result, only a small subset of them are responsible for most of the image we see or sound we hear."}, {"start": 134.52, "end": 142.20000000000002, "text": " It is a bit like the Pareto principle or the 80-20 rule on steroids."}, {"start": 164.52, "end": 183.16000000000003, "text": " In fact, in which infinitely many points can be written in the other solstice of the rest of it."}, {"start": 183.16000000000003, "end": 190.76000000000002, "text": " In a way that is a technique, infinitely many of us can need to 
produce a few digits with a half of the simple zero,"}, {"start": 190.76, "end": 197.39999999999998, "text": " the principle of position under the concept of base. Pure systems with very small and fixed,"}, {"start": 197.39999999999998, "end": 204.92, "text": " essentially very rare, but very spent here as an English, where we use score as in false score and as standard."}, {"start": 210.28, "end": 213.56, "text": " This is ice cream for my ears. Love it."}, {"start": 213.56, "end": 220.68, "text": " This work also introduces a metric to not only be able to compare similar sound synthesis techniques in the future,"}, {"start": 220.68, "end": 228.36, "text": " but the proposed technique is built around minimizing this metric, which leads us to an idea on which rays carry important information,"}, {"start": 228.36, "end": 231.0, "text": " and which ones we are better off discarding."}, {"start": 231.0, "end": 240.60000000000002, "text": " I also like this minimap on the upper left that actually shows what we hear in this footage exactly where the sound sources are and how they change their positions,"}, {"start": 240.60000000000002, "end": 246.20000000000002, "text": " looking forward to seeing and listening to similar presentations in future papers in this area."}, {"start": 246.2, "end": 254.04, "text": " Typical number for the execution time of the algorithm is between 15 to 20 milliseconds per frame on a consumer grade processor."}, {"start": 254.04, "end": 257.47999999999996, "text": " That is about 50 to 65 frames per second."}, {"start": 257.47999999999996, "end": 262.59999999999997, "text": " The position of the sound sources makes a great deal of difference for the classical path tracer."}, {"start": 262.59999999999997, "end": 270.2, "text": " The bi-directional path tracer, however, is not only more effective, but offers significantly more consistent results as well."}, {"start": 270.2, "end": 273.88, "text": " This new method is especially useful in these cases."}, {"start": 273.88, "end": 276.68, "text": " There are way more details explained in the paper."}, {"start": 276.68, "end": 281.56, "text": " For instance, it also supports path caching and also borrows the all-powerful,"}, {"start": 281.56, "end": 285.32, "text": " multiple-important sampling from photorealistic rendering research."}, {"start": 285.32, "end": 306.36, "text": " Have a look. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=FeMSEaHR8aw
Water Wave Simulation with Dispersion Kernels | Two Minute Papers #110
The paper "Dispersion Kernels for Water Wave Simulation" is available here: http://www.gmrv.es/Publications/2016/CMTKPO16/ Recommended for you: Rocking Out With Convolutions - https://www.youtube.com/watch?v=JKYQOAZRZu4 Separable Subsurface Scattering - https://www.youtube.com/watch?v=72_iAlYwl0c&t=1s WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image is taken from the corresponding paper (link available above). Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this piece of work, we are interested in simulating the dynamics of water waves. There are quite a few forces acting on a bucket of water, such as surface tension, internal pressure, external force fields, such as wind for instance, and gravity. Therefore, it is not a surprise that these waves can become quite complex, with a lot of high frequency details that are difficult to simulate. Accurately modeling wave reflections after colliding with solids is also an important and highly sought after detail to capture. This piece of work simulates Sir George Biddell Airy's dispersion model. Now, what does this mean exactly? The Airy model describes many common wave phenomena accurately, such as how longer waves are dominated by gravitational forces and how shorter waves move mostly according to the will of surface tension. However, as amazing as this theory is, it does not formulate these quantities in a way that would be directly applicable to a computer simulation. The main contribution of this paper is a new convolution formulation of this model and some more optimizations that can be directly added into a simulation, and not only that, but the resulting algorithm parallelizes and maps well to the graphics card in our computers. We have earlier discussed what a convolution is. Essentially, it is a mathematical operation that can add reverberation to the sound of our guitar or accurately simulate how light bounces around under our skin. Links to these episodes are available in the video description and at the end of the video, check them out. I'm sure you'll have a lot of fun with them. Regarding applications, as the technique obeys Airy's classical dispersion model, I expect and hope this to be useful for ocean and coastal engineering and in simulating huge tidal waves. Note that limitations apply; for instance, the original linear theory is mostly good for shallow water simulations and larger waves in deeper waters. The proposed approximation itself also has inherent limitations, such as the possibility of waves going through thinner objects. The resulting algorithm is, however, very accurate and, honestly, a joy to watch. It is also shown to support larger scale scenes. Here you see how beautifully it can simulate the capillary waves produced by these raindrops and, of course, the waves around the swans in the pond. This example took roughly one and a half seconds per frame to compute. You know the drill: a couple more follow-up papers down the line and it will surely run in real time. Can't wait. Also, please let me know in the comments section whether you have found this episode understandable. Was it easy to follow? Too much? Your feedback is, as always, highly appreciated. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.9, "text": " Dear Fellow Scholars, this is two minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.9, "end": 10.3, "text": " In this piece of work, we are interested in simulating the dynamics of water waves."}, {"start": 10.3, "end": 15.700000000000001, "text": " There are quite a few forces acting on a bucket of water, such as surface tension, internal"}, {"start": 15.700000000000001, "end": 20.900000000000002, "text": " pressure, external force fields, such as wind for instance, and gravity."}, {"start": 20.900000000000002, "end": 25.86, "text": " Therefore, it is not a surprise that these waves can become quite complex with a lot of"}, {"start": 25.86, "end": 29.1, "text": " high frequency details that are difficult to simulate."}, {"start": 29.1, "end": 34.260000000000005, "text": " Accurately modeling wave reflections after colliding with solids is also an important"}, {"start": 34.260000000000005, "end": 37.14, "text": " and highly sought after detail to capture."}, {"start": 37.14, "end": 41.58, "text": " This piece of work simulates Sir George Biddle-Aries dispersion model."}, {"start": 41.58, "end": 44.1, "text": " Now, what does this mean exactly?"}, {"start": 44.1, "end": 49.540000000000006, "text": " The Aerie model describes many common wave phenomena accurately, such as how longer waves"}, {"start": 49.540000000000006, "end": 54.900000000000006, "text": " are dominated by gravitational forces and how shorter waves tend mostly according to"}, {"start": 54.900000000000006, "end": 57.06, "text": " the will of surface tension."}, {"start": 57.06, "end": 62.06, "text": " However, as amazing this theory is, it does not formulate these quantities in a way that"}, {"start": 62.06, "end": 65.18, "text": " would be directly applicable to a computer simulation."}, {"start": 65.18, "end": 69.86, "text": " The main contribution of this paper is a new convolution formulation of this model and"}, {"start": 69.86, "end": 75.30000000000001, "text": " some more optimizations that can be directly added into a simulation and not only that,"}, {"start": 75.30000000000001, "end": 81.18, "text": " but the resulting algorithm parallelizes and maps well to the graphical card in our computers."}, {"start": 81.18, "end": 84.14, "text": " We have earlier discussed what a convolution is."}, {"start": 84.14, "end": 88.98, "text": " Essentially, it is a mathematical operation that can add reverberation to the sound of"}, {"start": 88.98, "end": 94.38, "text": " our guitar or accurately simulate how light bounces around under our skin."}, {"start": 94.38, "end": 98.06, "text": " Links to these episodes are available in the video description and at the end of the"}, {"start": 98.06, "end": 99.38, "text": " video, check them out."}, {"start": 99.38, "end": 102.1, "text": " I'm sure you'll have a lot of fun with them."}, {"start": 102.1, "end": 106.74000000000001, "text": " Regarding applications, as the technique obeys Aerie's classical dispersion model,"}, {"start": 106.74000000000001, "end": 111.9, "text": " I expect and hope this to be useful for ocean and coastal engineering and in simulating"}, {"start": 111.9, "end": 113.74000000000001, "text": " huge tidal waves."}, {"start": 113.74, "end": 118.61999999999999, "text": " Note that limitations apply, for instance, the original linear theory is mostly good for"}, {"start": 118.61999999999999, "end": 122.86, "text": " shallow water simulations and larger waves in deeper waters."}, {"start": 122.86, "end": 128.5, 
"text": " The proposed approximation itself also has inherent limitations, such as the possibility of waves"}, {"start": 128.5, "end": 130.82, "text": " going through thinner objects."}, {"start": 130.82, "end": 136.14, "text": " The resulting algorithm is, however, very accurate and honestly, a joy to watch."}, {"start": 136.14, "end": 139.7, "text": " It is also shown to support larger scale scenes."}, {"start": 139.7, "end": 145.1, "text": " Here you see how beautifully it can simulate the capillary waves produced by these raindrops"}, {"start": 145.1, "end": 148.42, "text": " and of course, the waves around the swans in the pond."}, {"start": 148.42, "end": 152.33999999999997, "text": " This example took roughly one and a half seconds per frame to compute."}, {"start": 152.33999999999997, "end": 156.45999999999998, "text": " You know the drill, a couple more follow-up papers down the line and it will surely run"}, {"start": 156.45999999999998, "end": 157.45999999999998, "text": " in real time."}, {"start": 157.45999999999998, "end": 158.78, "text": " Can't wait."}, {"start": 158.78, "end": 163.98, "text": " Also, please let me know in the comments section whether you have found this episode understandable."}, {"start": 163.98, "end": 165.45999999999998, "text": " Was it easy to follow?"}, {"start": 165.45999999999998, "end": 166.45999999999998, "text": " Too much?"}, {"start": 166.45999999999998, "end": 169.45999999999998, "text": " Your feedback is, as always, highly appreciated."}, {"start": 169.46, "end": 173.26000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7JbN9vXxGYE
3D Printing Acoustic Filters | Two Minute Papers #109
The paper "Acoustic Voxels: Computational Optimization of Modular Acoustic Filters" is available here: http://www.cs.columbia.edu/cg/lego/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. What is an acoustic filter? Well, it is an arbitrarily shaped object that takes a sound as an input and outputs a different sound. These filters have some really amazing applications that you'll hear about in a minute. In this work, a novel technique is proposed to automatically design such filters. It works by building an arbitrarily shaped object as a set of connected tiny resonators and chooses appropriate sizes and setups for each of these elements to satisfy a prescribed set of acoustic properties. Instead of resorting to a lengthy and flimsy trial and error phase, we can use physics to simulate what would happen if we were to use a given arrangement in reality. Again, one of those works that have a deep connection to the real world around us. Absolutely amazing! The goal can be to eliminate the peaks of the sound of a car horn or an airplane engine, and we can achieve this objective by means of optimization. The proposed applications can be divided into three main categories. The first is identifying and filtering noise attenuation components for a prescribed application, which we can also refer to as muffler design. In simpler words, we are interested in filtering or muffling the peaks of a known signal. Designing such objects is typically up to trial and error, and in this case, it is even harder because we are interested in a wider variety of shape choices other than exhaust pipes and tubes that are typically used in the industry. With this unoptimized muffler, the three peaks are not suppressed. Now we replace the muffler with an optimized one. The three peaks are more suppressed. Another application of muffler design is acoustic earmuffs. We may adapt this to a lot of situations by switching between acoustic filters. The first pair of earmuffs is optimized to reduce engine crank noise. We can see the three peaks are now suppressed to a lower sound level. Second, designing musical instruments is hard, and unless we design them around achieving a given acoustic response, we'll likely end up with inharmonious gibberish. And this method also supports designing musical instruments with... Hmm... well, non-conventional shapes. Well, this is as non-conventional as it gets I'm afraid. And also, that is about the most harmonious sound I've heard coming out of the rear end of a hippo. And third, this work opens up the possibility of making hollow objects that are easy to identify by means of acoustic tagging. Check out this awesome example that involves smacking these 3D printed piggies. Our acoustic filter design also opens up possibilities for a few new applications. In this acoustic tagging example, we optimize the tapping sound of three piggies to three sets of frequencies. The tap on each piggie filter sounds different, so that our iPhone app can detect and identify each of them. To go one step further, we demonstrate the ability to encode a bit-string pattern. Making use of the fact that the iPhone has both a speaker and a microphone at the bottom, we modulate the input white noise to encode different bit patterns. If you feel like improving your kung fu in math, there are tons of goodies such as transmission matrices, the Helmholtz equation, oh my! The paper and the talk slides are amazingly well written, and yes, you should definitely have a look at them. Let us know in the comments section if you have some ideas for possible applications beyond these ones, we'd love to read your take on these works. 
Thanks for watching and for your generous support, and I'll see you next time!
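For those wondering where the transmission matrices come in, here is a minimal Python sketch of the classic plane-wave transfer-matrix idea: chain together the matrices of a few tube segments and read off the transmission loss. The segment lengths, cross-sections and the toy "muffler" below are made-up illustrative values, and this is a simplified stand-in for the paper's voxel-based resonator cells, not its actual implementation.

import numpy as np

# Plane-wave transfer (transmission) matrix of a uniform tube segment:
#   [p_in, U_in]^T = T * [p_out, U_out]^T,
#   T = [[cos(kL),        1j*Z0*sin(kL)],
#        [1j*sin(kL)/Z0,  cos(kL)      ]],   Z0 = rho*c / S
rho, c = 1.2, 343.0          # air density [kg/m^3], speed of sound [m/s]

def segment_matrix(freq, length, area):
    k = 2.0 * np.pi * freq / c
    z0 = rho * c / area
    return np.array([[np.cos(k * length), 1j * z0 * np.sin(k * length)],
                     [1j * np.sin(k * length) / z0, np.cos(k * length)]])

def transmission_loss(freq, segments, pipe_area):
    t = np.eye(2, dtype=complex)
    for length, area in segments:                      # cascade the segments
        t = t @ segment_matrix(freq, length, area)
    z0 = rho * c / pipe_area
    a, b, cc, d = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return 20.0 * np.log10(0.5 * abs(a + b / z0 + cc * z0 + d))

# A toy "muffler": narrow pipe -> wide chamber -> narrow pipe (illustrative sizes).
muffler = [(0.05, 1e-4), (0.10, 1e-3), (0.05, 1e-4)]
for f in (250, 500, 1000, 2000):
    print(f"{f:5d} Hz  TL = {transmission_loss(f, muffler, 1e-4):5.1f} dB")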
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karoizhola Ifehir."}, {"start": 4.72, "end": 6.88, "text": " What is an acoustic filter?"}, {"start": 6.88, "end": 14.36, "text": " Well, it is an arbitrarily shaped object that takes a sound as an input and outputs a different sound."}, {"start": 14.36, "end": 18.88, "text": " These filters have some really amazing applications that you'll hear about in a minute."}, {"start": 18.88, "end": 24.080000000000002, "text": " In this work, a novel technique is proposed to automatically design such filters."}, {"start": 24.08, "end": 30.24, "text": " It works by building an arbitrarily shaped object as a set of connected tiny resonators"}, {"start": 30.24, "end": 38.4, "text": " and chooses appropriate sizes and setups for each of these elements to satisfy a prescribed set of acoustic properties."}, {"start": 38.4, "end": 42.32, "text": " Instead of resorting to a lengthy and flimsy trial and error phase,"}, {"start": 42.32, "end": 48.879999999999995, "text": " we can use physics to simulate what would happen if we were to use a given arrangement in reality."}, {"start": 48.879999999999995, "end": 53.68, "text": " Again, one of those works that have a deep connection to the real world around us."}, {"start": 53.68, "end": 55.6, "text": " Absolutely amazing!"}, {"start": 55.6, "end": 61.28, "text": " The goal can be to eliminate the peaks of the sound of a car horn or an airplane engine,"}, {"start": 61.28, "end": 65.03999999999999, "text": " and we can achieve this objective by means of optimization."}, {"start": 65.03999999999999, "end": 69.12, "text": " The proposed applications we can divide into three main categories."}, {"start": 69.12, "end": 75.6, "text": " The first is identifying and filtering noise attenuation components for a prescribed application"}, {"start": 75.6, "end": 78.64, "text": " to which we can also refer to as muffler design."}, {"start": 78.64, "end": 84.16, "text": " In simpler words, we are interested in filtering or muffling the peaks of a known signal."}, {"start": 84.16, "end": 88.48, "text": " Designing such objects is typically up to trial and error, and in this case,"}, {"start": 88.48, "end": 93.76, "text": " it is even harder because we are interested in a wider variety of shape choices"}, {"start": 93.76, "end": 111.12, "text": " other than exhaust pipes and tubes that are typically used in the industry."}, {"start": 111.12, "end": 114.48, "text": " With this unoptimized muffler, the three peaks are not suppressed."}, {"start": 114.48, "end": 124.08, "text": " Now we replace the muffler with an optimized one."}, {"start": 124.08, "end": 126.08, "text": " The three peaks are more suppressed."}, {"start": 126.08, "end": 129.36, "text": " Another application from muffler designs acoustic earmuffs."}, {"start": 129.36, "end": 132.72, "text": " We may adapt this to a lot of switching between acoustic filters."}, {"start": 132.72, "end": 144.48, "text": " The first pair of earmuffs is optimized reduced engine crank noise."}, {"start": 144.48, "end": 148.32, "text": " We can see a three peaks are now suppressed to a lower sound mode."}, {"start": 148.32, "end": 154.16, "text": " Second, designing musical instruments is hard, and unless we design them around achieving a given"}, {"start": 154.16, "end": 158.56, "text": " acoustic response, we'll likely end up with in harmonious gibberish."}, {"start": 158.56, "end": 163.12, "text": " And this method also supports designing 
musical instruments with..."}, {"start": 163.12, "end": 167.44, "text": " Hmm... well, non-conventional shapes."}, {"start": 167.44, "end": 189.68, "text": " Well, this is as non-conventional as it gets I'm afraid."}, {"start": 189.68, "end": 195.92, "text": " And also, that is about the most harmonious sound I've heard coming out of the rear end of a hippo."}, {"start": 195.92, "end": 202.23999999999998, "text": " And third, this work opens up the possibility of making hollow objects that are easy to identify"}, {"start": 202.23999999999998, "end": 204.39999999999998, "text": " by means of acoustic tagging."}, {"start": 204.39999999999998, "end": 209.2, "text": " Check out this awesome example that involves smacking these 3D printed piggies."}, {"start": 209.2, "end": 213.6, "text": " Our acoustic filter design also opens up possibilities for a few new applications."}, {"start": 213.6, "end": 218.95999999999998, "text": " In this acoustic tagging example, we optimize the tapping sound of three piggies to three sets of frequencies."}, {"start": 218.96, "end": 234.24, "text": " The taps each piggie filter sounds differently, so that our iPhone apted the text to identify each of them."}, {"start": 234.24, "end": 238.08, "text": " To go one step further, we demonstrate the ability to encode a bit-string pattern."}, {"start": 238.08, "end": 242.0, "text": " Making it used to the fact that the iPhone is both a speaker and microphone at the bottom,"}, {"start": 242.0, "end": 252.0, "text": " the modulate the input white noise to encode different pig patterns."}, {"start": 260.96, "end": 266.0, "text": " If you feel like improving your kung fu in math, there are tons of goodies such as transmission"}, {"start": 266.0, "end": 269.36, "text": " matrices, the Helmholtz equation, oh my!"}, {"start": 269.36, "end": 275.2, "text": " The paper and the talk slides are amazingly well written, and yes, you should definitely have a look at them."}, {"start": 275.2, "end": 279.68, "text": " Let us know in the comments section if you have some ideas for possible applications beyond"}, {"start": 279.68, "end": 282.48, "text": " these ones, we'd love to read your take on these works."}, {"start": 282.48, "end": 302.8, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=aMo7pkkaZ9o
Synchronizing Animations To Sound | Two Minute Papers #108
The paper "Inverse-Foley Animation: Synchronizing rigid-body motions to sound" is available here: http://www.cs.cornell.edu/projects/Sound/ifa/ Recommended for you: Sound Synthesis for Fluids With Bubbles - https://www.youtube.com/watch?v=kwqme8mEgz4 Synthesizing Sound From Collisions - https://www.youtube.com/watch?v=rskdLEl05KI Visually Indicated Sounds - https://www.youtube.com/watch?v=flOevlA9RyQ What Do Virtual Objects Sound Like? - https://www.youtube.com/watch?v=ZaFqvM1IsP8 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://commons.wikimedia.org/wiki/File:Spinning_top_(5448672388).jpg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is going to be absolutely amazing. Earlier, we had some delightful discussions on synthesizing sound from animations. The input would be a sequence, for instance, a video depicting the complete and utter destruction of plates, wooden bunnies, or footage of bubbling water. And the output should be a physical simulation that yields appropriate sound effects for the observed phenomenon. In short: input, an animation; output, synthesized sound effects for this animation. And get this! What if we turned the problem around, where we have sound as an input, and we try to synthesize an animation that could create such a sound? Hmm, I like it, a very spicy project indeed. And however crazy the idea may sound, given the richness of sound effects in nature, it may actually be easier to generate a believable animation than a perfect sound effect. It is extremely difficult to be able to match the amazingly detailed real-world sound of, for instance, a sliding, rolling, or bouncing bolt with a simulation. The more I think about this, the more I realize that this direction actually makes perfect sense. And no machine learning is used here. If we look under the hood, we'll see a pre-generated database of rigid body simulations with dozens of different objects, and a big graph that tries to group different events and motions together, and encode the order of execution of these events and motions. Now hold on to your papers and let's check out the first round of results together. I think it would be an understatement to say that they nailed it. And what's more, we can also add additional constraints like a prescribed landing location to the object to make sure that the animations are not too arbitrary, but are more in line with our artistic vision. Crazy! As I'm looking through the results, I am still in complete disbelief. This shouldn't be possible. Also, please don't get the impression that this is all there is to this technique. There are a lot more important details that we haven't discussed here that the more curious fellow scholars could be interested in: discrete and continuous time contact events, time warping, and motion connections. There are tons of goodies like these in the paper, please have a look to be able to better gauge and appreciate the merits of this work. Some limitations apply, such as that the environment is constrained to be the plane that we've seen in these animations. And as always with works that are inventing something completely new, it currently takes several minutes, which is not too bad, but of course there's plenty of room to accelerate the execution times. And some bonus footage. Blue dice are synchronized with hi-hat hits, and the red dice are synchronized with the bass and snare. This will be just spectacular for creating music videos, animated movies, and I'm convinced that professional artists will be able to do incredible things with such a tool. Thanks for watching, and for your generous support, and I'll see you next time!
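To give a flavor of the matching problem, here is a heavily simplified Python sketch: detect impact-like onsets in the input audio, then score precomputed rigid-body clips by how well their contact-event times line up with those onsets. The function names, the crude onset detector and the toy clip database are my own illustrative assumptions; the paper itself searches a much richer precomputed event graph with time warping.

import numpy as np

def detect_onsets(signal, sample_rate, window=512, threshold=4.0):
    # crude energy-based onset detector: flag windows with unusually high energy
    energy = np.array([np.sum(signal[i:i + window] ** 2)
                       for i in range(0, len(signal) - window, window)])
    mean, std = energy.mean(), energy.std() + 1e-9
    hits = np.where(energy > mean + threshold * std)[0]
    return hits * window / sample_rate            # onset times in seconds

def clip_score(onset_times, contact_times):
    # for every audio onset, distance to the nearest simulated contact event
    if len(contact_times) == 0:
        return np.inf
    return sum(min(abs(t - c) for c in contact_times) for t in onset_times)

def best_clip(onset_times, precomputed_clips):
    # precomputed_clips: {clip_name: list of contact-event times in seconds}
    return min(precomputed_clips,
               key=lambda name: clip_score(onset_times, precomputed_clips[name]))

# toy example with made-up clip names and contact times
clips = {"bolt_roll": [0.10, 0.34, 0.61], "dice_drop": [0.12, 0.25, 0.40, 0.55]}
print(best_clip([0.11, 0.26, 0.41, 0.56], clips))   # -> "dice_drop"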
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Efehir."}, {"start": 4.64, "end": 7.6000000000000005, "text": " This is going to be absolutely amazing."}, {"start": 7.6000000000000005, "end": 12.92, "text": " Earlier, we had some delightful discussions on synthesizing sound from animations."}, {"start": 12.92, "end": 22.92, "text": " The input would be a sequence, for instance, a video depicting the complete and other destruction of plates, wooden bunnies, or footage of bubbling water."}, {"start": 22.92, "end": 29.36, "text": " And the output should be a physical simulation that yields appropriate sound effects for the observed phenomenon."}, {"start": 29.36, "end": 35.24, "text": " In short, input, animation, output, synthesize sound effects for this animation."}, {"start": 35.24, "end": 41.28, "text": " And get this! What if we would turn the problem around where we have sound as an input?"}, {"start": 41.28, "end": 45.8, "text": " And we try to synthesize an animation that could create such a sound?"}, {"start": 45.8, "end": 49.68, "text": " Hmm, I like it, a very spicy project indeed."}, {"start": 49.68, "end": 54.760000000000005, "text": " And however crazy the idea may sound, given the richness of sound effects in nature,"}, {"start": 54.76, "end": 60.8, "text": " it may actually be easier to generate a believable animation than a perfect sound effect."}, {"start": 60.8, "end": 71.44, "text": " It is extremely difficult to be able to match the amazingly detailed real-world sound of, for instance, a sliding, rolling, or bouncing bolt with a simulation."}, {"start": 71.44, "end": 77.24, "text": " The more I think about this, the more I realize that this direction actually makes perfect sense."}, {"start": 77.24, "end": 87.03999999999999, "text": " And no machine learning is used here if we look under the hood, we'll see a pre-generated database of rigid body simulations with dozens of different objects,"}, {"start": 87.03999999999999, "end": 95.24, "text": " and a big graph that tries to group different events and motions together, and encode the order of execution of these events and motions."}, {"start": 95.24, "end": 107.72, "text": " Now hold on to your papers and let's check out the first round of results together."}, {"start": 155.24, "end": 159.04000000000002, "text": " I think it would be an understatement to say that they nailed it."}, {"start": 159.04000000000002, "end": 170.92000000000002, "text": " And what's more, we can also add additional constraints like a prescribed landing location to the object to make sure that the animations are not too arbitrary, but are more in line with our artistic vision."}, {"start": 170.92000000000002, "end": 178.12, "text": " Crazy! As I'm looking through the results, I am still in complete disbelief. This shouldn't be possible."}, {"start": 178.12, "end": 189.16, "text": " Also, please don't get the impression that this is all there is to this technique. 
There are a lot more important details that we haven't discussed here that the more curious fellow scholars could be interested in."}, {"start": 189.16, "end": 194.76, "text": " Discrete and continuous time contact events, time warping, and motion connections."}, {"start": 194.76, "end": 201.96, "text": " There are tons of goodies like these in the paper, please have a look to be able to better gauge and appreciate the merits of this work."}, {"start": 201.96, "end": 208.52, "text": " Some limitations apply, such as the environment is constrained to be this plane that we've seen in these animations."}, {"start": 208.52, "end": 220.52, "text": " And as always, with works that are inventing something completely new, it currently takes several minutes, which is not too bad, but of course there's plenty of room to accelerate the execution times."}, {"start": 220.52, "end": 222.12, "text": " And some bonus footage."}, {"start": 222.12, "end": 235.8, "text": " Blue dice are synchronized with high head hits, and the red dice are synchronized with the bass and snare."}, {"start": 235.8, "end": 246.44, "text": " This will be just spectacular for creating music videos, animated movies, and I'm convinced that professional artists will be able to do incredible things with such a tool."}, {"start": 246.44, "end": 252.6, "text": " Thanks for watching, and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=4MfG9CDufPA
Deep Learning Program Simplifies Your Drawings | Two Minute Papers #107
The Ishikawa Watanabe Laboratory, the University of Tokyo laboratory has all rights to the materials shown in the video. The paper "Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup" and its online demo is available here: http://hi.cs.waseda.ac.jp/~esimo/en/research/sketch/ http://hi.cs.waseda.ac.jp:8081/ Recommended for you: Rocking Out With Convolutions - https://www.youtube.com/watch?v=JKYQOAZRZu4 Separable Subsurface Scattering - https://www.youtube.com/watch?v=72_iAlYwl0c WaveNet by Google DeepMind - https://www.youtube.com/watch?v=CqFIVCD1WWo WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Image credits: Bitmap and vector images (two of them): Wikipedia - https://en.wikipedia.org/wiki/Vector_graphics and https://en.wikipedia.org/wiki/Image_tracing Image resolution: Wikipedia - https://en.wikipedia.org/wiki/Image_resolution Vectorization: Wikipedia - https://en.wikipedia.org/wiki/Image_tracing Thumbnail background - https://pixabay.com/photo-1281718/ Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. First, let's talk about raster and vector graphics. What do these terms mean exactly? A raster image is a grid made up of pixels, and for each of these pixels we specify a color. That's all there is in an image. It is nothing but a collection of pixels. All photographs on your phone and generally most images you encounter are raster images. It is easy to see that the quality of such images greatly depends on the resolution of this grid. Of course, the more grid points, the finer the grid is, the more details we can see. However, in return, if we disregard compression techniques, the file size grows proportionally to the number of pixels, and if we zoom in too close, we shall witness these classic staircase effects that we like to call aliasing. However, if we are designing a website or a logo for a company which should look sharp on all possible devices and zoom levels, vector graphics is a useful alternative. Vector images are inherently different from raster images as the base elements of the image are not pixels, but vectors and control points. The difference is like storing the shape of a circle on a lot of pixels point by point, which would be a raster image, or just saying that I want a circle at these coordinates with a given radius. And as you can see in this example, the point of this is to have razor sharp images at higher zoom levels as well. Unless we go too crazy with the fine details, file sizes are also often remarkably small for vector images because we are not storing the colors of millions of pixels. We are only storing shapes. If we want to sound a bit more journalistic, we can kind of say that vector images have infinite resolution. We can zoom in as much as we wish and we won't lose any detail during this process. Vectorization is the process where we try to convert a raster image to a vector image. Some also like to call this process image tracing. The immediate question arises, why are we not using vector graphics everywhere? Well, one, the smoother the color transitions and the more detail we have in our images, the quicker the advantage of vectorization evaporates. And two, also note that this procedure is not trivial and we are also often at the mercy of the vectorization algorithm in terms of output quality. It is often unclear in advance whether it will work well on a given input. So now we know everything we need to know to be able to understand and appreciate this amazing piece of work. The input is a rough sketch, that is, a raster image, and the output is a simplified, cleaned up and vectorized version of it. We are not only doing vectorization but simplification as well. This is a game changer because this way we can lean on the additional knowledge that these input raster images are sketches, hand drawn images. Therefore there is a lot of extra fluff in them that would be undesirable to retain in the vectorized output. Therefore the name sketch simplification. In each of these cases it is absolute insanity how well it works. Just look at these results. The next question is obviously how does this wizardry happen? It happens by using a classic deep learning technique, a convolutional neural network, of course, that was trained on a large number of input and output pairs. However, this is no ordinary convolutional neural network. This particular variant differs from the standard well known architecture as it is augmented with a series of upsampling convolution steps. 
Internally, this algorithm learns a sparse and concise representation of these input sketches. This means that it focuses on the most defining features and throws away all the unneeded fluff. And the upsampling convolution steps make it able to not only understand, but also synthesize new, simplified and high resolution images that we can easily vectorize using standard algorithms. It is fully automatic and requires no user intervention. In case you are scratching your head about these convolutions, we have had plenty of discussions about this peculiar term before. I have linked the appropriate episodes in the video description box. I think you'll find them a lot of fun. In one of them I pulled out a guitar and added reverberation to it using convolution. It is clear that there is a ton of untapped potential in using different convolution variations in deep neural networks. We have seen a DeepMind paper earlier that used dilated convolutions for state of the art speech synthesis, which is a novel convolution variant, and this piece of work is no exception either. There is also a cool online demo of this technique that anyone can try. Make sure to post your results in the comment section. We'd love to have a look at your findings. Also, have a look at this Two Minute Papers fan art, a nice little logo that one of our kind fellow scholars sent in. It's really great to see that you have taken your time to help out the series. That's very kind of you. Thank you. Thanks for watching and for your generous support. I'll see you next time.
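For a rough idea of what such an augmented convolutional network can look like, here is a toy PyTorch sketch of a fully convolutional encoder-decoder with upsampling convolutions. The layer counts and channel widths are illustrative assumptions and do not match the architecture in the paper.

import torch
import torch.nn as nn

# Strided convolutions shrink the rough sketch to a compact representation,
# then upsampling ("transposed") convolutions grow it back into a clean,
# full-resolution line drawing.
class SketchSimplifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),   # clean grayscale sketch
        )

    def forward(self, rough_sketch):                         # (N, 1, H, W)
        return self.decoder(self.encoder(rough_sketch))

model = SketchSimplifier()
print(model(torch.rand(1, 1, 256, 256)).shape)               # -> torch.Size([1, 1, 256, 256])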
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejolna Ifehir."}, {"start": 4.8, "end": 8.68, "text": " First, let's talk about RESTOR and Vector Graphics."}, {"start": 8.68, "end": 10.6, "text": " What do these terms mean exactly?"}, {"start": 10.6, "end": 16.52, "text": " A RESTOR image is a grid made up of pixels, and for each of these pixels we specify a"}, {"start": 16.52, "end": 17.52, "text": " color."}, {"start": 17.52, "end": 19.04, "text": " That's all there is in an image."}, {"start": 19.04, "end": 21.6, "text": " It is nothing but a collection of pixels."}, {"start": 21.6, "end": 26.88, "text": " All photographs on your phone and generally most images you encounter are RESTOR images."}, {"start": 26.88, "end": 32.0, "text": " It is easy to see that the quality of such images greatly depends on the resolution of this"}, {"start": 32.0, "end": 33.0, "text": " grid."}, {"start": 33.0, "end": 37.8, "text": " Of course, the more grid points, the finer the grid is, the more details we can see."}, {"start": 37.8, "end": 43.4, "text": " However, in return, if we disregard compression techniques, the file size grows proportionally"}, {"start": 43.4, "end": 48.879999999999995, "text": " to the number of pixels, and if we zoom in too close, we shall witness these classic staircase"}, {"start": 48.879999999999995, "end": 51.480000000000004, "text": " effects that we like to call aliasing."}, {"start": 51.480000000000004, "end": 56.4, "text": " However, if we are designing a website or a logo for a company which should look sharp"}, {"start": 56.4, "end": 62.16, "text": " on all possible devices and zoom levels, Vector Graphics is a useful alternative."}, {"start": 62.16, "end": 67.03999999999999, "text": " Vector images are inherently different from RESTOR images as the base elements of the image"}, {"start": 67.03999999999999, "end": 70.88, "text": " are not pixels, but vectors and control points."}, {"start": 70.88, "end": 76.12, "text": " The difference is like storing the shape of a circle on a lot of pixels point by point,"}, {"start": 76.12, "end": 81.12, "text": " which would be a RESTOR image, or just saying that I want to circle on these coordinates"}, {"start": 81.12, "end": 82.72, "text": " with a given radius."}, {"start": 82.72, "end": 87.48, "text": " And as you can see in this example, the point of this is to have razor sharp images at"}, {"start": 87.48, "end": 89.56, "text": " higher zoom levels as well."}, {"start": 89.56, "end": 95.24, "text": " Unless we go too crazy with defined details, file sizes are also often remarkably small"}, {"start": 95.24, "end": 100.16, "text": " for vector images because we are not storing the colors of millions of pixels."}, {"start": 100.16, "end": 102.16, "text": " We are only storing shapes."}, {"start": 102.16, "end": 107.8, "text": " If we want to sound a bit more journalistic, we can kind of say that vector images have"}, {"start": 107.8, "end": 109.32, "text": " infinite resolution."}, {"start": 109.32, "end": 115.03999999999999, "text": " We can zoom in as much as we wish and we won't lose any detail during this process."}, {"start": 115.03999999999999, "end": 120.08, "text": " Vectorization is the process where we try to convert a RESTOR image to a vector image."}, {"start": 120.08, "end": 123.55999999999999, "text": " Some also like to call this process image tracing."}, {"start": 123.55999999999999, "end": 128.56, "text": " The immediate question arises, why are we not using 
vector graphics everywhere?"}, {"start": 128.56, "end": 134.24, "text": " Well, one, the smoother the color transitions and the more detail we have in our images,"}, {"start": 134.24, "end": 137.88, "text": " the quicker the advantage of vectorization evaporates."}, {"start": 137.88, "end": 143.48, "text": " And two, also note that this procedure is not trivial and we are also often at the mercy"}, {"start": 143.48, "end": 147.44, "text": " of the vectorization algorithm in terms of output quality."}, {"start": 147.44, "end": 152.32, "text": " It is often unclear in advance whether it will work well on a given input."}, {"start": 152.32, "end": 157.07999999999998, "text": " So now we know everything we need to know to be able to understand and appreciate this"}, {"start": 157.07999999999998, "end": 158.84, "text": " amazing piece of work."}, {"start": 158.84, "end": 164.6, "text": " The input is a rough sketch that is a RESTOR image and the output is a simplified, cleaned"}, {"start": 164.6, "end": 166.8, "text": " up and vectorized version of it."}, {"start": 166.8, "end": 170.8, "text": " We are not only doing vectorization but simplification as well."}, {"start": 170.8, "end": 175.8, "text": " This is a game changer because this way we can lean on the additional knowledge that these"}, {"start": 175.8, "end": 179.72, "text": " input RESTOR images are sketches, hand drawn images."}, {"start": 179.72, "end": 184.20000000000002, "text": " Therefore there is a lot of extra fluff in them that would be undesirable to retain"}, {"start": 184.20000000000002, "end": 186.04000000000002, "text": " in the vectorized output."}, {"start": 186.04000000000002, "end": 188.72000000000003, "text": " Therefore the name sketch simplification."}, {"start": 188.72000000000003, "end": 193.28, "text": " In each of these cases it is absolute insanity how well it works."}, {"start": 193.28, "end": 195.44, "text": " Just look at these results."}, {"start": 195.44, "end": 199.35999999999999, "text": " The next question is obviously how does this wizardry happen?"}, {"start": 199.35999999999999, "end": 204.32, "text": " It happens by using a classic deep learning technique, a convolutional neural network,"}, {"start": 204.32, "end": 208.72, "text": " of course that was trained on a large number of input and output pairs."}, {"start": 208.72, "end": 212.52, "text": " However this is no ordinary convolutional neural network."}, {"start": 212.52, "end": 217.56, "text": " This particular variant differs from the standard well known architecture as it is augmented"}, {"start": 217.56, "end": 220.92, "text": " with a series of upsampling convolution steps."}, {"start": 220.92, "end": 225.92, "text": " Alternatively this algorithm learns a sparse and concise representation of these input"}, {"start": 225.92, "end": 226.92, "text": " sketches."}, {"start": 226.92, "end": 231.92, "text": " This means that it focuses on the most defining features and throws away all the unneeded"}, {"start": 231.92, "end": 232.92, "text": " fluff."}, {"start": 232.92, "end": 238.07999999999998, "text": " And the upsampling convolution steps make it able to not only understand that synthesize"}, {"start": 238.07999999999998, "end": 243.48, "text": " new, simplified and high resolution images that we can easily vectorize using standard"}, {"start": 243.48, "end": 244.64, "text": " algorithms."}, {"start": 244.64, "end": 248.48, "text": " It is fully automatic and requires no user intervention."}, {"start": 248.48, "end": 252.92, "text": " In 
case you are scratching your head about these convolutions, we have had plenty of discussions"}, {"start": 252.92, "end": 254.95999999999998, "text": " about this peculiar term before."}, {"start": 254.95999999999998, "end": 258.2, "text": " I have linked the appropriate episodes in the video description box."}, {"start": 258.2, "end": 260.48, "text": " I think you'll find them a lot of fun."}, {"start": 260.48, "end": 265.8, "text": " In one of them I pulled out a guitar and added reverberation to it using convolution."}, {"start": 265.8, "end": 271.2, "text": " It is clear that there is a ton of untapped potential in using different convolution variations"}, {"start": 271.2, "end": 272.8, "text": " in deep neural networks."}, {"start": 272.8, "end": 277.96, "text": " We have seen in a deep mind paper earlier that used dilated convolutions for state of the"}, {"start": 277.96, "end": 283.71999999999997, "text": " art speech synthesis that is a novel convolution variant and this piece of work is no exception"}, {"start": 283.71999999999997, "end": 284.71999999999997, "text": " either."}, {"start": 284.71999999999997, "end": 289.08, "text": " There is also a cool online demo of this technique that anyone can try."}, {"start": 289.08, "end": 291.84, "text": " Make sure to post your results in the comment section."}, {"start": 291.84, "end": 294.0, "text": " We'd love to have a look at your findings."}, {"start": 294.0, "end": 297.24, "text": " Also, have a look at these two-minute papers fan art."}, {"start": 297.24, "end": 300.88, "text": " A nice little logo, one of our kind fellow scholars sent in."}, {"start": 300.88, "end": 304.76, "text": " It's really great to see that you have taken your time to help out the series."}, {"start": 304.76, "end": 306.12, "text": " That's very kind of you."}, {"start": 306.12, "end": 307.12, "text": " Thank you."}, {"start": 307.12, "end": 309.28000000000003, "text": " Thanks for watching and for your generous support."}, {"start": 309.28, "end": 337.79999999999995, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NnzzSkKKoa8
Human Pose Estimation With Deep Learning | Two Minute Papers #106
The paper "Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image" is available here: http://files.is.tue.mpg.de/black/papers/BogoECCV2016.pdf Welch Labs: Neural Networks Demystified - https://www.youtube.com/playlist?list=PLiaHhY2iBX9hdHaRr6b7XevZtgZRa1PoU Learning to See - https://www.youtube.com/playlist?list=PLiaHhY2iBX9ihLasvE8BKnS2Xg8AhY6iV Our earlier episode on optimization - https://www.youtube.com/watch?v=1ypV5ZiIbdA WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail image background credits: https://pixabay.com/photo-1725207/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Pose estimation is an interesting area of research where we typically have a few images or video footage of humans and we try to automatically extract the pose this person was taking. In short, the input is mostly a 2D image and the output is typically a skeleton of the person. Applications of pose estimation include automatic creation of assets for computer games and digital media, analyzing and coaching the techniques of athletes, or helping computers understand what they see for the betterment of robotics and machine learning techniques. And this is just a taste, the list was by no means exhaustive. Beyond the obvious challenge of trying to reconstruct 3D information from a simple 2D image, the problem is fraught with difficulties as one has to be able to overcome the ambiguity of lighting, occlusions and clothing covering the body. A tough problem, no question about that. An ideal technique would do this automatically without any user intervention, which sounds like wishful thinking. Or does it? In this paper, a previously proposed convolutional neural network is used to predict the position of the individual joints and curiously, it turns out that we can create a faithful representation of the 3D human body from that by means of optimization. We have had a previous episode on mathematical optimization, you know the drill, the link is available in the video description box. What is remarkable here is that not only the pose, but the body type is also inferred, therefore the output of the process is not just the skeleton, but full 3D geometry. It is coarse geometry, so don't expect a ton of details, but it's 3D geometry, more than what most other competing techniques can offer. To ease the computational burden of this problem, in this optimization formulation, healthy constraints are assumed that apply to the human body, such as avoiding unnatural knee and elbow bends and self-intersections. If we use these constraints, the space in which we have to look for possible solutions shrinks considerably. The results show that this algorithm outperforms several other state-of-the-art techniques by a significant margin. It is an auspicious opportunity to preserve and recreate a lot of historic events in digital form, maybe even use them in computer games, and I'm sure that artists will make great use of such techniques. Really well done, the paper is extremely well written, the mathematics and the optimization formulations are beautiful, it was such a joy to read. Regarding the future, I am pretty sure we are soon going to see some pose and skeleton transfer applications via machine learning. The input would be a real-world video with a person doing something and we could essentially edit the video and bend these characters to our will. There are some exploratory works in this area already, the Disney guys for instance are doing quite well. There will be lots of fun to be had indeed. Also, make sure to check out the YouTube channel of Welch Labs, who has a great introductory series for neural networks, which is in my opinion second to none. He also has a new series called Learning to See, where he codes up a machine learning technique for a computer vision application. It is about counting the number of fingers on an image. Really cool, right? The quality of these videos is through the roof, the links for both of these series are available in the description box, make sure to check them out. 
Thanks for watching and for your generous support and I'll see you next time.
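To make the reprojection-plus-prior idea a bit more tangible, here is a minimal Python sketch that fits raw 3D joint positions to CNN-style 2D joint detections under a pinhole camera, regularized by assumed bone lengths. The focal length, joint coordinates, bone lengths and weighting are all made up for illustration; the paper's actual objective fits a full statistical body model with learned pose and shape priors.

import numpy as np
from scipy.optimize import minimize

f = 1000.0                                            # assumed focal length (pixels)

def project(pts):                                     # pinhole camera at the origin
    return f * pts[:, :2] / pts[:, 2:3]

def objective(params, joints_2d, bones, rest_lengths):
    pts = params.reshape(-1, 3)
    reproj = np.sum((project(pts) - joints_2d) ** 2)  # match the 2D detections
    bones_err = sum((np.linalg.norm(pts[i] - pts[j]) - rest_lengths[(i, j)]) ** 2
                    for i, j in bones)                # keep limb lengths plausible
    return reproj + 1e4 * bones_err

# toy 3-joint chain (shoulder-elbow-wrist); pixel coordinates are made up
joints_2d = np.array([[0.0, 0.0], [60.0, 10.0], [120.0, 30.0]])
bones, rest_lengths = [(0, 1), (1, 2)], {(0, 1): 0.30, (1, 2): 0.25}
init = np.column_stack([joints_2d * 3.0 / f, np.full(3, 3.0)]).ravel()
res = minimize(objective, init, args=(joints_2d, bones, rest_lengths))
print("recovered 3D joints:\n", np.round(res.x.reshape(-1, 3), 3))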
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.88, "end": 10.16, "text": " Post-estimation is an interesting area of research where we typically have a few images"}, {"start": 10.16, "end": 16.72, "text": " or video footage of humans and we try to automatically extract the post this person was taking."}, {"start": 16.72, "end": 22.48, "text": " In short, the input is mostly a 2D image and the output is typically a skeleton of the"}, {"start": 22.48, "end": 23.48, "text": " person."}, {"start": 23.48, "end": 28.400000000000002, "text": " Applications of post-estimation include automatic creation of assets for computer games and"}, {"start": 28.4, "end": 35.04, "text": " digital media, analyzing and coaching the techniques of athletes, or helping computers understand"}, {"start": 35.04, "end": 39.68, "text": " what they see for the betterment of robotics and machine learning techniques."}, {"start": 39.68, "end": 43.64, "text": " And this is just a taste, the list was by no means exhaustive."}, {"start": 43.64, "end": 48.92, "text": " Beyond the obvious challenge of trying to reconstruct 3D information from a simple 2D"}, {"start": 48.92, "end": 54.84, "text": " image, the problem is fraught with difficulties as one has to be able to overcome the ambiguity"}, {"start": 54.84, "end": 58.84, "text": " of lighting, occlusions and clothing covering the body."}, {"start": 58.84, "end": 62.040000000000006, "text": " A tough problem, no question about that."}, {"start": 62.040000000000006, "end": 67.28, "text": " An idea technique would do this automatically without any user intervention which sounds"}, {"start": 67.28, "end": 69.08, "text": " like wishful thinking."}, {"start": 69.08, "end": 70.08, "text": " Or does it?"}, {"start": 70.08, "end": 75.28, "text": " In this paper, a previously proposed convolutional neural network is used to predict the position"}, {"start": 75.28, "end": 82.36, "text": " of the individual joints and curiously, it turns out that we can create a faithful representation"}, {"start": 82.36, "end": 86.6, "text": " of the 3D human body from that by means of optimization."}, {"start": 86.6, "end": 91.56, "text": " We have had a previous episode on mathematical optimization, you know the drill, the link"}, {"start": 91.56, "end": 94.44, "text": " is available in the video description box."}, {"start": 94.44, "end": 100.16, "text": " What is remarkable here is that not only the pose, but the body type is also inferred,"}, {"start": 100.16, "end": 105.8, "text": " therefore the output of the process is not just the skeleton, but full 3D geometry."}, {"start": 105.8, "end": 111.16, "text": " It is coarse geometry, so don't expect a ton of details, but it's 3D geometry more"}, {"start": 111.16, "end": 114.52, "text": " than what most other competing techniques can offer."}, {"start": 114.52, "end": 119.72, "text": " To ease the computational burden of this problem, in this optimization formulation, healthy"}, {"start": 119.72, "end": 125.39999999999999, "text": " constraints are assumed that apply to the human body, such as avoiding unnatural knee"}, {"start": 125.39999999999999, "end": 128.44, "text": " and elbow bends and self-intersections."}, {"start": 128.44, "end": 133.76, "text": " If we use these constraints, the space in which we have to look for possible solutions shrinks"}, {"start": 133.76, "end": 134.92, "text": " considerably."}, {"start": 134.92, "end": 140.12, "text": " The 
results show that this algorithm outperforms several other state-of-the-art techniques"}, {"start": 140.12, "end": 142.0, "text": " by a significant margin."}, {"start": 142.0, "end": 147.56, "text": " It is an auspicious opportunity to preserve and recreate a lot of historic events in digital"}, {"start": 147.56, "end": 152.56, "text": " form, maybe even use them in computer games, and I'm sure that artists will make great"}, {"start": 152.56, "end": 154.68, "text": " use of such techniques."}, {"start": 154.68, "end": 159.20000000000002, "text": " Really well done, the paper is extremely well written, the mathematics and the optimization"}, {"start": 159.20000000000002, "end": 163.44, "text": " formulations are beautiful, it was such a joy to read."}, {"start": 163.44, "end": 168.08, "text": " Regarding the future, I am pretty sure we are soon going to see some pose and skeleton"}, {"start": 168.08, "end": 170.84, "text": " transfer applications via machine learning."}, {"start": 170.84, "end": 175.84, "text": " The input would be a real-world video with a person doing something and we could essentially"}, {"start": 175.84, "end": 179.64000000000001, "text": " edit the video and bend these characters to our will."}, {"start": 179.64000000000001, "end": 184.16000000000003, "text": " There are some exploratory works in this area already, the Disney guys for instance are"}, {"start": 184.16000000000003, "end": 185.64000000000001, "text": " doing quite well."}, {"start": 185.64000000000001, "end": 188.28, "text": " There will be lots of fun to be had indeed."}, {"start": 188.28, "end": 193.64000000000001, "text": " Also, make sure to check out the YouTube channel of Welch Labs, who has a great introductory"}, {"start": 193.64, "end": 198.35999999999999, "text": " series for neural networks, which is in my opinion second to none."}, {"start": 198.35999999999999, "end": 202.83999999999997, "text": " He also has a new series called Learning to See, where he codes up a machine learning"}, {"start": 202.83999999999997, "end": 205.23999999999998, "text": " technique for a computer vision application."}, {"start": 205.23999999999998, "end": 208.48, "text": " It is about counting the number of fingers on an image."}, {"start": 208.48, "end": 210.0, "text": " Really cool, right?"}, {"start": 210.0, "end": 214.39999999999998, "text": " The quality of these videos is through the roof, the link for both of these series are"}, {"start": 214.39999999999998, "end": 217.6, "text": " available in the description box, make sure to check them out."}, {"start": 217.6, "end": 224.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QkqNzrsaxYc
Computer Games Empower Deep Learning Research | Two Minute Papers #105
The paper "Playing for Data: Ground Truth from Computer Games" is available here: http://download.visinf.tu-darmstadt.de/data/from_games/ Computer graphics / VR challenge grant at Experiment: https://experiment.com/grants/graphics-and-virtualreality Other popular datasets: - CamVid - http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ - CityScapes - https://www.cityscapes-dataset.com/dataset-overview/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. What are datasets? A dataset is typically a big bunch of data, for instance a database of written letters, digits, images of human faces, or stock market data that scientists can use to test their algorithms on. If two research groups wish to find out whose algorithm performs better at recognizing traffic signs, they run their techniques on one of these datasets and test their methods on equal footing. For instance, the CamVid dataset stands for Cambridge-driving Labeled Video Database and it offers several hundreds of images depicting a variety of driving scenarios. It is meant to be used to test classification techniques. The input is an image and the question is, for each of the pixels, which class does it belong to? Classes include roads, vegetation, vehicles, pedestrians, buildings, trees and more. These regions are labeled with all the different colors that you see on these images. To have a usable dataset, we have to label tens of thousands of these images, and as you may imagine, creating such labeled images requires a ton of human labor. The first guy has to accurately trace the edges of each of the individual objects seen on every image, and there should be a second guy to cross check and make sure everything is in order. That's quite a chore, and we haven't even talked about all the other problems that arise from processing footage created with handheld cameras, so this takes quite a bit of time and effort with stabilization and calibration as well. So how do we create huge and accurate datasets without investing a remarkable amount of human labor? Well, hear out this incredible idea. What if we could record a video of us wandering about in an open world computer game and annotate those images? And this way we can enjoy several advantages. One, since we have recorded continuous videos, after annotating the very first image, we will have information from the next frames. Therefore if we do it well, we can propagate the labeling from one image to the next one. That's a huge time saver. Two, in a computer game, one can stage and record animations of important but rare situations that would otherwise be extremely difficult to film. Adding rain or day and night cycles to a set of images is also trivial because we can simply query the game engine to do this for us. Three, not only that, but the algorithm also has some knowledge about the rendering process itself. This means that it looks at how the game communicates with the software drivers and the video card, tracks when the geometry and textures for a given type of car are being loaded or discarded, and uses this information to further help the label propagation process. And number four, we don't have any of the problems that stem from using handheld cameras. Noise, blurriness, problems with the lens and so on are all non-issues. Using the previous CamVid dataset, the annotation of one image takes about 60 minutes, while with this dataset, 7 seconds. Thus the authors have published almost 25,000 high quality images and their annotations to aid computer vision and machine learning research in the future. That's a lot of images. But of course, the ultimate question arises: how do we know if these are really high quality training samples? They were only taken from a computer game after all. Well, the results show that using this dataset we can achieve an equivalent quality of learning compared to the CamVid dataset by using one third as many images. 
Excellent piece of work, absolutely loving the idea of using video game footage as a surrogate for real world data. Fantastic. And in the meantime, while we are discussing computer graphics, here's a nice computer graphics challenge grant from Experiment. Basically, if you start a new research project through their crowdfunding system, you may win additional funding that comes straight from them. Free money. If you are interested in doing any kind of research in this area or if you are a long time practitioner, make sure to have a look. The link is available in the video description box. Thanks for watching and for your generous support, and I'll see you next time.
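To illustrate the label propagation idea in the simplest possible terms, here is a toy Python sketch where a class label assigned once to a (mesh, texture) combination is reused every time the game engine draws that combination again. All identifiers, labels and the frame structure below are made up; the actual method tracks real graphics-API resources and per-pixel coverage.

def propagate_labels(frames, annotated_resources):
    """frames: list of dicts mapping pixel regions -> (mesh_id, texture_id).
    annotated_resources: {(mesh_id, texture_id): class_label}, grown as we go."""
    labeled_frames = []
    for frame in frames:
        labels = {}
        for region, resource in frame.items():
            if resource in annotated_resources:            # seen before: label is free
                labels[region] = annotated_resources[resource]
            else:                                           # new resource: needs a human once
                labels[region] = annotated_resources.setdefault(resource, "unlabeled")
        labeled_frames.append(labels)
    return labeled_frames

# toy example: the same car resource appears in both frames, so labeling it once
# in the seed dictionary covers every later occurrence automatically
seed = {("mesh_car_07", "tex_car_red"): "vehicle", ("mesh_road_01", "tex_asphalt"): "road"}
frames = [{"patch_a": ("mesh_car_07", "tex_car_red"), "patch_b": ("mesh_road_01", "tex_asphalt")},
          {"patch_c": ("mesh_car_07", "tex_car_red"), "patch_d": ("mesh_tree_02", "tex_leaves")}]
print(propagate_labels(frames, seed))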
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karoijona Ifehir."}, {"start": 5.0, "end": 6.0, "text": " What are datasets?"}, {"start": 6.0, "end": 12.36, "text": " A dataset is typically a big bunch of data, for instance a database of written letters,"}, {"start": 12.36, "end": 18.12, "text": " digits, images of human faces, stock market data that scientists can use to test their"}, {"start": 18.12, "end": 19.48, "text": " algorithms on."}, {"start": 19.48, "end": 24.42, "text": " If two research groups wish to find out whose algorithm performs better at recognizing"}, {"start": 24.42, "end": 29.28, "text": " traffic signs, they run their techniques on one of these datasets and test their methods"}, {"start": 29.28, "end": 30.96, "text": " on equal footings."}, {"start": 30.96, "end": 36.68, "text": " For instance, the Camvid dataset stands for Cambridge Driving Labeled Video Database"}, {"start": 36.68, "end": 42.08, "text": " and it offers several hundreds of images depicting a variety of driving scenarios."}, {"start": 42.08, "end": 45.56, "text": " It is meant to be used to test classification techniques."}, {"start": 45.56, "end": 50.760000000000005, "text": " The input is an image and the question is for each of the pixels which one of them belongs"}, {"start": 50.760000000000005, "end": 52.36, "text": " to what class?"}, {"start": 52.36, "end": 58.84, "text": " Classes include roads, vegetation, vehicles, pedestrians, buildings, trees and more."}, {"start": 58.84, "end": 63.120000000000005, "text": " These regions are labeled with all the different colors that you see on these images."}, {"start": 63.120000000000005, "end": 68.52000000000001, "text": " To have a usable dataset, we have to label tens of thousands of these images and as you"}, {"start": 68.52000000000001, "end": 73.72, "text": " may imagine, creating such labeled images requires a ton of human labor."}, {"start": 73.72, "end": 78.52000000000001, "text": " The first guy has to accurately trace the edges of each of the individual objects seen"}, {"start": 78.52000000000001, "end": 83.24000000000001, "text": " on every image and there should be a second guy to cross check and make sure everything"}, {"start": 83.24000000000001, "end": 84.24000000000001, "text": " is in order."}, {"start": 84.24000000000001, "end": 88.60000000000001, "text": " That's quite a chore and we haven't even talked about all the other problems that arise"}, {"start": 88.6, "end": 93.75999999999999, "text": " from processing footage created with handheld cameras so this takes quite a bit of time"}, {"start": 93.75999999999999, "end": 97.19999999999999, "text": " and effort with stabilization and calibration as well."}, {"start": 97.19999999999999, "end": 102.8, "text": " So how do we create huge and accurate datasets without investing a remarkable amount of human"}, {"start": 102.8, "end": 103.8, "text": " labor?"}, {"start": 103.8, "end": 106.72, "text": " Well, here out this incredible idea."}, {"start": 106.72, "end": 112.84, "text": " What if we could record a video of us wandering about in an open world computer game and annotate"}, {"start": 112.84, "end": 114.11999999999999, "text": " those images?"}, {"start": 114.11999999999999, "end": 117.11999999999999, "text": " And this way we can enjoy several advantages."}, {"start": 117.12, "end": 122.80000000000001, "text": " One, since we have recorded continuous videos after annotating the very first image, we"}, {"start": 122.80000000000001, "end": 
125.24000000000001, "text": " will have information from the next frames."}, {"start": 125.24000000000001, "end": 130.4, "text": " Therefore if we do it well, we can propagate the labeling from one image to the next one."}, {"start": 130.4, "end": 132.12, "text": " That's a huge time saver."}, {"start": 132.12, "end": 138.56, "text": " Two, in a computer game, one can stage and record animations of important but rare situations"}, {"start": 138.56, "end": 141.84, "text": " that would otherwise be extremely difficult to film."}, {"start": 141.84, "end": 146.84, "text": " Adding rain or day and night cycles to a set of images is also trivial because we simply"}, {"start": 146.84, "end": 149.64000000000001, "text": " can query the game engine to do this for us."}, {"start": 149.64000000000001, "end": 154.88, "text": " Three, not only that, but the algorithm also has some knowledge about the rendering process"}, {"start": 154.88, "end": 155.96, "text": " itself."}, {"start": 155.96, "end": 160.84, "text": " This means that it looks at how the game communicates with the software drivers and the video card"}, {"start": 160.84, "end": 167.0, "text": " tracks when the geometry and textures for a given type of car are being loaded or discarded"}, {"start": 167.0, "end": 171.72, "text": " and uses this information to further help the label propagation process."}, {"start": 171.72, "end": 176.8, "text": " And number four, we don't have any of the problems that stem from using handheld cameras."}, {"start": 176.8, "end": 182.84, "text": " Noise, blurriness, problems with the lens and so on are all non-issues."}, {"start": 182.84, "end": 188.8, "text": " Using this previous Cambi dataset, the annotation of one image takes about 60 minutes while"}, {"start": 188.8, "end": 191.92000000000002, "text": " with this dataset 7 seconds."}, {"start": 191.92000000000002, "end": 197.48000000000002, "text": " Thus the authors have published almost 25,000 high quality images and their annotations"}, {"start": 197.48000000000002, "end": 201.56, "text": " to aid computer vision and machine learning research in the future."}, {"start": 201.56, "end": 203.20000000000002, "text": " That's a lot of images."}, {"start": 203.20000000000002, "end": 206.0, "text": " But of course, the ultimate question arises."}, {"start": 206.0, "end": 209.88, "text": " How do we know if these are really high quality training samples?"}, {"start": 209.88, "end": 212.84, "text": " They were only taken from a computer game after all."}, {"start": 212.84, "end": 218.48, "text": " While the results show that using this dataset we can achieve an equivalent quality of learning"}, {"start": 218.48, "end": 223.96, "text": " compared to the Cambi dataset by using one third as many images."}, {"start": 223.96, "end": 229.56, "text": " Excellent piece of work, absolutely loving the idea of using video game footage as a surrogate"}, {"start": 229.56, "end": 231.24, "text": " for real world data."}, {"start": 231.24, "end": 232.32, "text": " Fantastic."}, {"start": 232.32, "end": 236.64, "text": " And in the meantime, while we are discussing computer graphics, here's a nice computer"}, {"start": 236.64, "end": 239.4, "text": " graphics challenge grant from experiment."}, {"start": 239.4, "end": 243.72, "text": " Basically, if you start a new research project through their crowdfunded system, you may"}, {"start": 243.72, "end": 247.28, "text": " win additional funding that comes straight from them."}, {"start": 247.28, "end": 248.28, "text": " Free 
money."}, {"start": 248.28, "end": 253.44, "text": " If you are interested in doing any kind of research in this area or if you are a long time practitioner,"}, {"start": 253.44, "end": 254.44, "text": " make sure to have a look."}, {"start": 254.44, "end": 257.15999999999997, "text": " The link is available in the video description box."}, {"start": 257.16, "end": 264.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=sWZQxB2es88
Building a Community Around Two Minute Papers
The Two Minute Papers Data project: https://www.reddit.com/r/twominutepapers/comments/58qa8p/github_repository_for_video_data/ https://www.reddit.com/r/twominutepapers/comments/5a6jes/suggestions_and_help_on_twominutepapersdata/ A nice writeup about the Starcraft 2 panel at Blizzcon: https://www.reddit.com/r/starcraft/comments/5bb6y0/notes_from_the_ai_panel/ Recommended for you: StyLit, Illumination-Guided Artistic Style Transfer - https://www.youtube.com/watch?v=ksCSL6Ql0Yg Real-Time Shading With Area Light Sources - https://www.youtube.com/watch?v=SC0D7aJOySY WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://flic.kr/p/J5Ys9N Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a quick update on our plans with Two Minute Papers. The more I look at our incredibly high quality comments sections on YouTube or the community efforts that started out on our subreddit, the more I get the feeling that we should strive to create a community platform around the series. A platform for people who wish to learn, exchange ideas, collaborate and help each other. For instance, The Two Minute Papers Data Project has recently started. The initial idea is that there would be a public crowdsourced database with episode-related metadata that anyone can add information to. This would be useful for several reasons. We can, for instance, include implementation links for each of the papers, for the ones that have source code either from the authors or some independent implementations from the community. We can also include keywords on the topic of the episode, such as machine learning, recurrent neural networks, or cloth simulations, for an easy way to search for a topic of interest. If we are interested in any of these, we can just easily search for a keyword and immediately know what episode it was referenced in. The very same could be done with some of the technical words used in the series, such as metropolis sampling, backpropagation and overfitting. This way, we could maybe build an intuitive technical dictionary with definitions and links pointing to the appropriate episodes where they were explained. I think that would be the ultimate learning resource, and having such a dictionary would be super useful. I was playing with the thought of having this for quite a while now. So this Data Project has just started out on The Two Minute Papers subreddit, but it can only happen with the help of the community. And this means you. If you are interested in helping, please drop by and leave us a note. I've put the link in the description box. Looking forward to seeing you there. It seems that there is interest in doing such collaborations and experiments together between you fellow scholars. Imagine how cool it would be to see students and researchers forming an open and helpful community, brainstorming, running the implementations and sharing their ideas and findings with each other. Who knows, maybe one day a science paper will be written as a result of such endeavors. Please let us know in the comments section what you think about these things. Would you participate in such an endeavor? As always, we love reading your awesome feedback. Also, in the meantime, a huge thank you to the following fellow scholars who translated our episodes to several languages. If you wish to contribute too, click the cogwheel button in any video on the lower right, click subtitles slash cc, then add subtitles slash cc. I am trying my very best to credit every contributor. If I have forgotten anyone, that's not intended, please let me know and I'll fix it. Meanwhile on Patreon, we are currently oscillating around our current milestone. Reaching this one means that all of our software and hardware costs are covered, which is quite amazing. Because creating YouTube videos with high information density and short duration is the very definition of financial suicide, it's very helpful for us to have such a safety net with Patreon. Also, there was an episode not so long ago about the new Two Minute Papers machine, which was bought solely from your support, and I'm still stunned by this.
I tried to explain this to close members of the family, that we were able to buy this new machine with the support of complete strangers from the internet whom I've never met. They said that this is clearly impossible. Apparently, it's not impossible, and so many kind people are watching the series. Really, thank you so much. When we reach the milestone after that, we will be able to spend 1% of these funds to directly help other research projects and conferences. Please remember that your support makes Two Minute Papers possible and you can cancel these pledges at any time. Also, if you don't feel like using Patreon or don't have any disposable income, that is completely fine. I'd like to emphasize that none of this is required, it's just an option to help, and we completely understand that it's not easy to make ends meet for many of you, even with lots of overtime at a tiring and difficult job. Two Minute Papers is always going to be here and will always be free for everyone. We would like to spread the word so even more of us can marvel at the wonders of research. Just watching the series and sharing these episodes is also a great deal of help, and we are super grateful for it. It is really incredible to see how the series has grown and how many of you fellow scholars are interested in research and the inventions of the future. Let's continue our journey of science together. In the meantime, Google DeepMind and Blizzard have announced a joint effort to make it possible for computer programs to play StarCraft 2, a famous real-time strategy game, from the start of 2017. Whoa! There was a question about the first thing I will do when this project takes off. Well of course, I'll take one week of vacation and write an AI that can play against itself, and watch with tremendous enjoyment as they beat the living hell out of each other and other real players. If I heard it correctly, in one of the upcoming championships, computer algorithms will also be allowed to play and, who knows, maybe defeat the reigning player champions, you know, like in chess and Go. We are living in amazing times indeed, I am counting the days, super excited. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kalo Ejolna Ifehir."}, {"start": 4.96, "end": 8.700000000000001, "text": " This is a quick update on our plans with Two Minute Papers."}, {"start": 8.700000000000001, "end": 13.44, "text": " The more I look at our incredibly high quality comments sections on YouTube or the community"}, {"start": 13.44, "end": 18.080000000000002, "text": " efforts that started out on our subreddit, the more I get the feeling that we should strive"}, {"start": 18.080000000000002, "end": 20.84, "text": " to create a community platform around the series."}, {"start": 20.84, "end": 26.8, "text": " A platform for people who wish to learn, exchange ideas, collaborate and help each other."}, {"start": 26.8, "end": 30.84, "text": " For instance, The Two Minute Papers Data Project has recently started."}, {"start": 30.84, "end": 36.08, "text": " The initial idea is that there would be a public crowdsource database with episode-related"}, {"start": 36.08, "end": 39.36, "text": " metadata that anyone can add information to."}, {"start": 39.36, "end": 41.72, "text": " This would be useful for several reasons."}, {"start": 41.72, "end": 46.6, "text": " We can, for instance, include implementation links for each of the papers for the ones"}, {"start": 46.6, "end": 51.24, "text": " that have source code either from the authors or some independent implementations from"}, {"start": 51.24, "end": 52.24, "text": " the community."}, {"start": 52.24, "end": 57.32, "text": " We can also include keywords on the topic of the episode, such as machine learning, recurrent"}, {"start": 57.32, "end": 62.760000000000005, "text": " neural networks, class simulations for an easy way to search for a topic of interest."}, {"start": 62.760000000000005, "end": 67.68, "text": " If we are interested in any of these, we can just easily search for a keyword and immediately"}, {"start": 67.68, "end": 70.2, "text": " know what episode it was referenced in."}, {"start": 70.2, "end": 75.24000000000001, "text": " The very same could be done with some of the technical words used in the series, such as"}, {"start": 75.24000000000001, "end": 79.44, "text": " metropolis sampling, backpropagation and overfitting."}, {"start": 79.44, "end": 84.88, "text": " This way, we could maybe build an intuitive technical dictionary with definitions and links"}, {"start": 84.88, "end": 87.96, "text": " pointing to the appropriate episodes where they were explained."}, {"start": 87.96, "end": 92.4, "text": " I think that would be the ultimate learning resource and having such a dictionary would"}, {"start": 92.4, "end": 93.64, "text": " be super useful."}, {"start": 93.64, "end": 97.08, "text": " I was playing with the thought of having this for quite a while now."}, {"start": 97.08, "end": 101.4, "text": " So this Data Project has just started out on The Two Minute Papers subreddit, but it"}, {"start": 101.4, "end": 104.52, "text": " can only happen with the help of the community."}, {"start": 104.52, "end": 105.75999999999999, "text": " And this means you."}, {"start": 105.76, "end": 109.28, "text": " If you are interested in helping, please drop by and leave us a note."}, {"start": 109.28, "end": 111.48, "text": " I've put the link in the description box."}, {"start": 111.48, "end": 113.28, "text": " Looking forward to seeing you there."}, {"start": 113.28, "end": 118.0, "text": " It seems that there is interest in doing such collaborations and experiments together"}, 
{"start": 118.0, "end": 120.08000000000001, "text": " between you fellow scholars."}, {"start": 120.08000000000001, "end": 125.2, "text": " Imagine how cool it would be to see students and researchers forming an open and helpful"}, {"start": 125.2, "end": 130.48000000000002, "text": " community brainstorming, running the implementations and sharing their ideas and findings with"}, {"start": 130.48000000000002, "end": 131.48000000000002, "text": " each other."}, {"start": 131.48, "end": 137.44, "text": " Who knows, maybe one day a science paper will be written as a result of such endeavors."}, {"start": 137.44, "end": 140.79999999999998, "text": " Please let us know in the comments section what you think about these things."}, {"start": 140.79999999999998, "end": 143.0, "text": " Would you participate in such an endeavor?"}, {"start": 143.0, "end": 146.0, "text": " As always, we love reading your awesome feedback."}, {"start": 146.0, "end": 150.23999999999998, "text": " Also in the meantime, a huge thank you for the following fellow scholars who translated"}, {"start": 150.23999999999998, "end": 152.6, "text": " our episodes to several languages."}, {"start": 152.6, "end": 158.07999999999998, "text": " If you wish to contribute to, click the cogwheel button in any video on the lower right, click"}, {"start": 158.08, "end": 163.12, "text": " subtitles slash cc, then add subtitles slash cc."}, {"start": 163.12, "end": 166.32000000000002, "text": " I am trying my very best to credit every contributor."}, {"start": 166.32000000000002, "end": 171.16000000000003, "text": " If I have forgotten anyone that's not intended, please let me know and I'll fix it."}, {"start": 171.16000000000003, "end": 176.44, "text": " Meanwhile on Patreon, we are currently oscillating around our current milestone."}, {"start": 176.44, "end": 181.28, "text": " Reaching this one means that all of our software and hardware costs are covered, which is"}, {"start": 181.28, "end": 182.60000000000002, "text": " quite amazing."}, {"start": 182.60000000000002, "end": 187.28, "text": " Because creating YouTube videos with high information density and short duration is the"}, {"start": 187.28, "end": 192.36, "text": " very definition of financial suicide, it's very helpful for us to have such a safety net"}, {"start": 192.36, "end": 193.36, "text": " with Patreon."}, {"start": 193.36, "end": 197.96, "text": " Also, there was an episode not so long ago about the no-two-minute papers machine which"}, {"start": 197.96, "end": 202.4, "text": " was bought solely from your support and I'm still stunned by this."}, {"start": 202.4, "end": 206.64, "text": " I tried to explain this to close members of the family that we were able to buy this new"}, {"start": 206.64, "end": 212.12, "text": " machine with the support of complete strangers from the internet whom I've never met."}, {"start": 212.12, "end": 214.84, "text": " They said that this is clearly impossible."}, {"start": 214.84, "end": 220.04, "text": " Apparently, it's not impossible and so many kind people are watching the series."}, {"start": 220.04, "end": 222.32, "text": " Really, thank you so much."}, {"start": 222.32, "end": 227.6, "text": " When we reach the milestone after that, we will be able to spend 1% of these funds to"}, {"start": 227.6, "end": 231.04, "text": " directly help other research projects and conferences."}, {"start": 231.04, "end": 235.64000000000001, "text": " Please remember that your support makes two-minute papers possible and you can cancel these"}, 
{"start": 235.64000000000001, "end": 237.44, "text": " pledges at any time."}, {"start": 237.44, "end": 242.32, "text": " Also, if you don't feel like using Patreon or don't have any disposable income that is"}, {"start": 242.32, "end": 243.84, "text": " completely fine."}, {"start": 243.84, "end": 248.36, "text": " I'd like to emphasize that none of this is required, it's just an option to help and"}, {"start": 248.36, "end": 253.2, "text": " we completely understand that it's not easy to make ends meet for many of you, even with"}, {"start": 253.2, "end": 256.64, "text": " lots of overtime at a tiring and difficult job."}, {"start": 256.64, "end": 261.6, "text": " Two-minute papers is always going to be here and will always be free for everyone."}, {"start": 261.6, "end": 266.84000000000003, "text": " We would like to spread the word so even more of us can marvel at the wonders of research."}, {"start": 266.84000000000003, "end": 271.16, "text": " Just watching the series and sharing these episodes is also a great deal of help and"}, {"start": 271.16, "end": 272.84000000000003, "text": " we are super grateful for it."}, {"start": 272.84, "end": 278.32, "text": " It is really incredible to see how the series has grown and how many of you fellow scholars"}, {"start": 278.32, "end": 281.64, "text": " are interested in research and the inventions of the future."}, {"start": 281.64, "end": 284.15999999999997, "text": " Let's continue our journey of science together."}, {"start": 284.15999999999997, "end": 289.88, "text": " In the meantime, Google DeepMind and Blizzard has announced a joint effort to make it possible"}, {"start": 289.88, "end": 295.76, "text": " for computer programs to play Starcraft 2, a famous real-time strategy game from the start"}, {"start": 295.76, "end": 297.08, "text": " of 2017."}, {"start": 297.08, "end": 298.88, "text": " Whoa!"}, {"start": 298.88, "end": 303.12, "text": " There was a question about the first thing I will do when this project takes off."}, {"start": 303.12, "end": 308.88, "text": " Well of course, I take one week of vacation and write an AI that can play against itself"}, {"start": 308.88, "end": 314.04, "text": " and watch with tremendous enjoyment as they beat the living hell out of each other and"}, {"start": 314.04, "end": 315.64, "text": " other real players."}, {"start": 315.64, "end": 319.88, "text": " If I heard it correctly, in one of the upcoming championships, computer algorithms will"}, {"start": 319.88, "end": 325.56, "text": " also be allowed to play and who knows maybe defeat the reigning player champions, you know,"}, {"start": 325.56, "end": 327.48, "text": " like in chess and go."}, {"start": 327.48, "end": 332.56, "text": " We are living amazing times indeed, I am counting the days, super excited."}, {"start": 332.56, "end": 356.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u9kvJbWb_1U
How To Steal a Lost Election With Gerrymandering | Two Minute Papers #104
Gerrymandering, is the process of manipulating electoral district boundaries to turn the tide of an election. There are efficient mathematical techniques to optimally solve such a problem and we'll see how the different electoral districts in the US evolved over the last 20 years as a result of gerrymandering. ___________________ An article on the evolution of a district over 20 years: https://www.theguardian.com/us-news/2016/oct/19/gerrymandering-supreme-court-us-election-north-carolina Recommended for you: Metropolis Light Transport - https://www.youtube.com/watch?v=f0Uzit_-h3M Automatic Parameter Control for Metropolis Light Transport - https://www.youtube.com/watch?v=9wOBkJJ-w2s The redistricting game seen in the video: http://www.redistrictinggame.org/game/launchgame.php WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image credits: https://pixabay.com/photo-1594962/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's talk about the mathematical intricacies of elections. Here you can see the shape of the 12th congressional district in North Carolina in the 90s. This is not a naturally shaped electoral district, is it? One might say this is more of an abomination. If we try to understand why it has this peculiar shape, we shall find a remarkable mathematical mischief. Have a look at this example of 50 electoral precincts. The distribution is 60% blue and 40% red. So this means that the blue party should win the elections and gain seats with a ratio of 60 to 40, right? Well, this is not exactly how it works. There's a majority decision, district by district, regardless of the vote ratios. If the electoral districts are shaped like this, then the blue party wins 5 to 0. However, if they are shaped like this, the red party wins 3 to 2, which is kind of mind-blowing because the votes are the very same, and this is known as the wasted vote effect. This term doesn't refer to someone who enters the voting booth intoxicated. It means that one can think of pretty much every vote beyond 50% plus 1 for a party in a district as irrelevant. It doesn't matter if the district is won by 99% of the votes or just 50% plus 1 vote. So the cunning plan is now laid out: what if we could regroup all of these extra votes to win in a different district where we were losing? And now we have ceremoniously arrived at the definition of gerrymandering, which is the process of manipulating electoral district boundaries to turn the tide of an election. The term originates from one of the elections in the USA in the 1800s, where Governor Elbridge Gerry signed a bill to reshape the districts of Massachusetts in order to favor his party. And at that time, understandably, all the papers and comic artists were up in arms about this bill. So how does one perform gerrymandering? Gerrymandering is actually a mathematical problem of the purest form, where we are trying to maximize the number of seats that we can win by manipulating the district boundaries appropriately. It is important to note that the entire process relies on a relatively faithful prediction of the vote distributions per region, which in many countries is not really changing all that much in time. This is a problem that we can solve via standard optimization techniques. Now hold on to your papers and get this: for instance, we can use metropolis sampling to solve this problem, which is absolutely stunning. So far, in an earlier episode, we have used metropolis sampling to develop a super-efficient light simulation program to create beautiful images of virtual scenes, and the very same technique can also be used to steal an election. In fact, metropolis sampling was developed and used during the Manhattan Project, where the first atomic bomb was created in Los Alamos. I think it is completely understandable that the power of mathematics and research still gives many of us sleepless nights, sometimes delightful, sometimes perilous. It is also important to note that in order to retain the fairness of elections in a district-based system, it is of utmost importance that these district boundaries are drawn by independent organizations and that the process is as transparent as possible. I decided not to cite a concrete paper in this episode. If you would like to read up on this topic, I recommend searching for keywords like redistricting and gerrymandering on Google Scholar.
Please feel free to post your more interesting findings in the comments section, we always have excellent discussions therein. Thanks for watching and for your generous support, and I'll see you next time.
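As a toy illustration of the optimization view described in the transcript above, here is a small, hypothetical Python sketch that uses Metropolis-style sampling to search for a district assignment that maximizes the number of seats for one party. It ignores real-world constraints such as contiguity and equal population, it is not taken from any particular paper, and all names and numbers are made up.

# Toy Metropolis-style search over district assignments of precincts.
# Score = number of districts won by the favored party; worse moves are
# occasionally accepted, which helps the sampler escape local optima.
import math
import random

def seats_won(assignment, votes, n_districts, party):
    totals = [[0, 0] for _ in range(n_districts)]
    for district, vote in zip(assignment, votes):
        totals[district][vote] += 1
    return sum(1 for t in totals if t[party] > t[1 - party])

def metropolis_redistrict(votes, n_districts, party=1, steps=20000, temperature=0.5):
    n = len(votes)
    assignment = [i * n_districts // n for i in range(n)]   # initial equal-sized blocks
    score = seats_won(assignment, votes, n_districts, party)
    for _ in range(steps):
        i = random.randrange(n)
        old = assignment[i]
        assignment[i] = random.randrange(n_districts)        # propose moving one precinct
        new_score = seats_won(assignment, votes, n_districts, party)
        if new_score >= score or random.random() < math.exp((new_score - score) / temperature):
            score = new_score                                 # accept the proposal
        else:
            assignment[i] = old                               # reject it
    return assignment, score

# 50 precincts, 60% leaning towards party 0 -- yet party 1 can still end up
# winning a majority of the 5 districts, as in the example from the video.
votes = [0] * 30 + [1] * 20
random.shuffle(votes)
print(metropolis_redistrict(votes, n_districts=5)[1])

The acceptance rule is the same idea that makes Metropolis sampling work in light transport: good proposals are always kept, and bad ones are kept with a probability that shrinks as they get worse.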
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejone Fahir."}, {"start": 4.5600000000000005, "end": 8.44, "text": " Let's talk about the mathematical intricacies of the elections."}, {"start": 8.44, "end": 14.56, "text": " Here you can see the shape of the 12th congressional district in North Carolina in the 90s."}, {"start": 14.56, "end": 18.0, "text": " This is not a naturally shaped electoral district, is it?"}, {"start": 18.0, "end": 20.76, "text": " One might say this is more of an abomination."}, {"start": 20.76, "end": 27.76, "text": " If you try to understand why it has this peculiar shape, we shall find a remarkable mathematical mischief."}, {"start": 27.76, "end": 31.360000000000003, "text": " Have a look at this example of 50 electoral precincts."}, {"start": 31.360000000000003, "end": 35.44, "text": " The distribution is 60% blue and 40% red."}, {"start": 35.44, "end": 42.32, "text": " So this means that the blue party should win the elections and gain seeds with the ratio of 60 to 40."}, {"start": 42.32, "end": 43.160000000000004, "text": " Right?"}, {"start": 43.160000000000004, "end": 45.92, "text": " Well, this is not exactly how it works."}, {"start": 45.92, "end": 50.8, "text": " There's a majority decision, district by district, regardless of the vote ratios."}, {"start": 50.8, "end": 57.120000000000005, "text": " If the electoral districts are shaped like this, then the blue party wins 5 to 0."}, {"start": 57.12, "end": 64.08, "text": " However, if they are shaped like this, the red party wins 3 to 2, which is kind of mind-blowing"}, {"start": 64.08, "end": 69.12, "text": " because the votes are the very same, and this is known as the wasted vote effect."}, {"start": 69.12, "end": 74.4, "text": " This term doesn't refer to someone who enters the voting booth intoxicated."}, {"start": 74.4, "end": 82.72, "text": " This means that one can think of pretty much every vote beyond 50% plus 1 to a party in a district to be irrelevant."}, {"start": 82.72, "end": 89.52, "text": " It doesn't matter if the district is won by 99% of the votes or just 50% plus 1 vote."}, {"start": 89.52, "end": 92.16, "text": " So the counting plan is now laid out."}, {"start": 92.16, "end": 100.08, "text": " What if instead we could regroup all of these extra votes to win in a different district where we were losing?"}, {"start": 100.08, "end": 104.64, "text": " And now we have ceremoniously arrived to the definition of gerrymandering,"}, {"start": 104.64, "end": 110.64, "text": " which is the process of manipulating electoral district boundaries to turn the tide of an election."}, {"start": 110.64, "end": 115.2, "text": " The term originates from one of the elections in the USA in the 1800s,"}, {"start": 115.2, "end": 121.28, "text": " where Governor Albridge Gerry signed a bill to reshape the districts of Massachusetts in order to"}, {"start": 121.28, "end": 127.6, "text": " favor his party. And at that time, understandably, all the papers and comic artists were up in arms"}, {"start": 127.6, "end": 134.4, "text": " about this bill. So how does one perform gerrymandering? Gerrymandering is actually a mathematical problem"}, {"start": 134.4, "end": 139.44, "text": " of the purest form where we are trying to maximize the number of seats that we can win"}, {"start": 139.44, "end": 145.44, "text": " by manipulating the district boundaries appropriately. 
It is important to note that the entire process"}, {"start": 145.44, "end": 150.07999999999998, "text": " relies on a relatively faithful prediction of the vote distributions per region,"}, {"start": 150.07999999999998, "end": 154.16, "text": " which in many countries is not really changing all that much in time."}, {"start": 154.16, "end": 158.48, "text": " This is a problem that we can solve via standard optimization techniques."}, {"start": 158.48, "end": 161.52, "text": " Now hold on to your papers and get this."}, {"start": 161.52, "end": 167.68, "text": " For instance, we can use metropolis sampling to solve this problem, which is absolutely stunning."}, {"start": 167.68, "end": 173.36, "text": " So far, in an earlier episode, we have used metropolis sampling to develop a super-efficient"}, {"start": 173.36, "end": 179.84, "text": " light simulation program to create beautiful images of virtual scenes, and the very same technique"}, {"start": 179.84, "end": 185.68, "text": " can also be used to steal an election. In fact, metropolis sampling was developed and used"}, {"start": 185.68, "end": 190.72, "text": " during the Manhattan Project, where the first atomic bomb was created in Los Alamos."}, {"start": 190.72, "end": 196.0, "text": " I think it is completely understandable that the power of mathematics and research still give"}, {"start": 196.0, "end": 201.36, "text": " many of us sleepless nights, sometimes delightful, sometimes perilous."}, {"start": 201.36, "end": 206.8, "text": " It is also important to note that in order to retain the fairness of elections in a district-based"}, {"start": 206.8, "end": 212.32, "text": " system, it is of utmost importance that these district boundaries are drawn by independent"}, {"start": 212.32, "end": 216.4, "text": " organizations and that the process is as transparent as possible."}, {"start": 216.4, "end": 221.76, "text": " I decided not to cite a concrete paper in this episode. If you would like to read up on this topic,"}, {"start": 221.76, "end": 227.51999999999998, "text": " I recommend searching for keywords like redistricting and gerrymandering on Google's scholar."}, {"start": 227.51999999999998, "end": 231.92, "text": " Please feel free to post the more interesting findings of yours in the comments section,"}, {"start": 231.92, "end": 236.95999999999998, "text": " we always have excellent discussions therein. Thanks for watching and for your generous support,"}, {"start": 236.96, "end": 255.76000000000002, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fl-7e8yBUic
Real-Time Soft Body Dynamics for Video Games | Two Minute Papers #103
We have had plenty of episodes about fluid simulations, so how about some tasty soft body dynamics for today? Soft body dynamics basically means computing what happens when we smash together different deformable objects. Examples include folding sheets, playing around with noodles, or torturing armadillos. I think this is a nice and representative showcase of the immense joys of computer graphics research! Clarification: the 15 ms per frame execution time is a nice ballpark number, but it depends on the scene. ____________________ The paper "Vivace: a Practical Gauss-Seidel Method for Stable Soft Body Dynamics" is available here: http://pellacini.di.uniroma1.it/publications/vivace16/vivace16.html WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Image credits: Thumbnail background: https://pixabay.com/photo-1747663/ Graph coloring: https://commons.wikimedia.org/wiki/File:Complete_coloring_clebsch_graph.svg Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We have had plenty of episodes about fluid simulations, so how about some tasty soft body dynamics for today? Soft body dynamics basically means computing what happens when we smash together different deformable objects. Examples include folding sheets, playing around with noodles, or torturing armadillos. I think this is a nice and representative showcase of the immense joys of computer graphics research. The key to real-time physically-based simulations is parallelism. Parallelism means that we have many of the same units working together in harmony. Imagine if we had to assign 50 people to work together to make a coffee in the same kitchen. As you may imagine, they would trip over each other and the result would be chaos, not productivity. Such a process would not scale favorably, because as we add more people, after around three or four, the productivity would not increase but drop significantly. You can often hear a similar example of nine pregnant women not being able to give birth to a baby in one month. For better scaling, we have to subdivide a bigger task into small tasks in a way that these people can work independently. The more independently they can work, the better the productivity will scale as we add more people. In software engineering, these virtual people are what we like to call threads or compute units. As of 2016, mid-tier processors are equipped with four to eight logical cores, and for a video card, we typically have compute units in the order of hundreds. So if we wish to develop efficient algorithms, we have to make sure that these big simulation tasks are subdivided in a way that these threads are not tripping over each other. And the big contribution of this piece of work is a technique to distribute the computation tasks to these compute units in a way that they are working on independent chunks of the problem. This is achieved by using graph coloring, which is a technique typically used for designing seating plans, exam timetabling, solving sudoku puzzles, and similar assignment tasks. It not only works in an absolutely spectacular manner, but graph theory is an immensely beautiful subfield of mathematics, so additional style points to the authors. The technique produces remarkably realistic animations and requires only 15 milliseconds per frame, which means that this technique can render over 60 frames per second comfortably. And the other most important factor is that this technique is also stable, meaning that it offers an appropriate solution even when many other techniques fail to deliver. Thanks for watching and for your generous support, and I'll see you next time!
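To illustrate the role of graph coloring mentioned above, here is a small, hypothetical Python sketch of a greedy coloring of constraints: two constraints that share a particle conflict and receive different colors, so all constraints of one color can be processed in parallel without tripping over each other. This is only a generic greedy colorer, not the authors' optimized GPU implementation.

# Greedy graph coloring of simulation constraints. Constraints touching a common
# particle get different colors; each color class can then be solved in parallel.
from collections import defaultdict

def color_constraints(constraints):
    # constraints: list of tuples of particle indices, e.g. (i, j) for a spring.
    touching = defaultdict(list)            # particle index -> constraint indices
    for c_idx, particles in enumerate(constraints):
        for p in particles:
            touching[p].append(c_idx)

    colors = [None] * len(constraints)
    for c_idx, particles in enumerate(constraints):
        taken = {colors[other]
                 for p in particles for other in touching[p]
                 if colors[other] is not None}
        color = 0
        while color in taken:               # smallest color not used by a neighbor
            color += 1
        colors[c_idx] = color
    return colors

# Example: three springs in a chain; the middle one conflicts with both ends.
print(color_constraints([(0, 1), (1, 2), (2, 3)]))   # -> [0, 1, 0]

Here the first and third springs share no particle, so they end up with the same color and could be solved simultaneously, while the middle spring must wait for its own pass.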
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karajjona Ifehir."}, {"start": 4.32, "end": 11.36, "text": " We have had plenty of episodes about fluid simulations, so how about some tasty soft body dynamics for today?"}, {"start": 11.36, "end": 18.8, "text": " Soft body dynamics basically means computing what happens when we smash together different deformable objects."}, {"start": 18.8, "end": 24.96, "text": " Examples include folding sheets, playing around with noodles, or torturing armadillos."}, {"start": 24.96, "end": 31.200000000000003, "text": " I think this is a nice and representative showcase of the immense joys of computer graphics research."}, {"start": 31.200000000000003, "end": 35.52, "text": " The key to real-time physically-based simulations is parallelism."}, {"start": 35.52, "end": 40.88, "text": " Parallelism means that we have many of the same units working together in harmony."}, {"start": 40.88, "end": 47.44, "text": " Imagine if we had to assign 50 people to work together to make a coffee in the same kitchen."}, {"start": 47.44, "end": 53.44, "text": " As you may imagine, they would trip over each other and the result would be chaos, not productivity."}, {"start": 53.44, "end": 58.239999999999995, "text": " Such a process would not scale favorably because as we would add more people,"}, {"start": 58.239999999999995, "end": 64.0, "text": " after around three or four, the productivity would not increase but drop significantly."}, {"start": 64.0, "end": 71.12, "text": " You can often hear a similar example of nine pregnant women not being able to give birth to a baby in one month."}, {"start": 71.12, "end": 79.44, "text": " For better scaling, we have to subdivide a bigger task into small tasks in a way that these people can work independently."}, {"start": 79.44, "end": 84.96, "text": " The more independently they can work, the better the productivity will scale as we add more people."}, {"start": 84.96, "end": 90.56, "text": " In software engineering, these virtual people would like to call threads or compute units."}, {"start": 90.56, "end": 96.88, "text": " As of 2016, mid-tier processors are equipped with four to eight logical cores"}, {"start": 96.88, "end": 101.92, "text": " and for a video card, we typically have compute units in the order of hundreds."}, {"start": 101.92, "end": 104.64, "text": " So if we wish to develop efficient algorithms,"}, {"start": 104.64, "end": 110.48, "text": " we have to make sure that these big simulation tasks are subdivided in a way so that these threads"}, {"start": 110.48, "end": 112.4, "text": " are not tripping over each other."}, {"start": 112.4, "end": 117.76, "text": " And the big contribution of this piece of work is a technique to distribute the computation tasks"}, {"start": 117.76, "end": 123.6, "text": " to these compute units in a way that they are working on independent chunks of the problem."}, {"start": 123.6, "end": 130.08, "text": " This is achieved via using graph coloring, which is a technique typically used for designing seating plans,"}, {"start": 130.08, "end": 135.60000000000002, "text": " exam timetabling, solving sudoku puzzles, and similar assignment tasks."}, {"start": 135.60000000000002, "end": 139.20000000000002, "text": " It not only works in an absolutely spectacular manner,"}, {"start": 139.20000000000002, "end": 143.12, "text": " but graph theory is an immensely beautiful subfield of mathematics,"}, {"start": 143.12, "end": 145.76000000000002, "text": " 
so additional style points to the authors."}, {"start": 145.76000000000002, "end": 152.08, "text": " The technique produces remarkably realistic animations and requires only 15 milliseconds per frame,"}, {"start": 152.08, "end": 157.12, "text": " which means that this technique can render over 60 frames per second comfortably."}, {"start": 157.12, "end": 161.20000000000002, "text": " And the other most important factor is that this technique is also stable,"}, {"start": 161.20000000000002, "end": 166.88, "text": " meaning that it offers an appropriate solution even when many other techniques fail to deliver."}, {"start": 166.88, "end": 191.04, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=nK3giIsNAHg
Generating Tangle Patterns With Grammars | Two Minute Papers #102
A tangle pattern is a beautiful, intervowen tapestry of basic stroke patterns, like dots, straight lines, and simple curves. If we look at some of these works, we see that many of these are highly structured, and maybe, we could automatically create such beautiful structures with a computer. ______________ The paper "gTangle: a Grammar for the Procedural Generation of Tangle Patterns" is available here: http://pellacini.di.uniroma1.it/publications/gtangle16/gtangle16.html The paper "Layer-Based Procedural Design of Facades" is available here: https://www.cg.tuwien.ac.at/research/publications/2015/Ilcik_2015_LAY/ https://vimeo.com/118400233 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Image credits: Wikipedia: https://en.wikipedia.org/wiki/Context-free_grammar The thumbnail background image was taken from the corresponding paper (link above). Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A tangle pattern is a beautiful, interwoven tapestry of basic stroke patterns, like dots, straight lines and simple curves. If we look at some of these works, we see that many of these are highly structured, and maybe we could automatically create such beautiful structures with a computer. And now, hold on to your papers, because this piece of work is about generating tangle patterns with grammars. Okay, now, stop right there. How on earth do grammars have anything to do with computer graphics or tangle patterns? The idea of this sounds as outlandish as it gets. Grammars are a set of rules that tell us how to build up a structure, such as a sentence, properly from small elements, like nouns, adjectives, pronouns, and so on. Mathematicians also study grammars extensively and set up rules that enforce that every mathematical expression satisfies a number of desirable constraints. It's not a surprise that when mathematicians talk about grammars, they will use these mathematical hieroglyphs, like the ones you see on the screen. It is a beautiful subfield of mathematics that I have studied myself before, and I'm still hooked. And get this: from grammars, we can build not only sentences, but buildings. For instance, a shape grammar for buildings can describe rules like: a wall can contain several windows; below a window goes a window sill; one wall may have at most two doors attached; and so on. My friend, Martin Ilchik, is working on defining such shape grammars for buildings. And using these grammars, he can generate a huge amount of different skyscrapers, facades, and all kinds of cool buildings. In this piece of work, we start out with an input shape, subdivide it into multiple other shapes, assign these smaller shapes into groups, and the final tangle is obtained by choosing patterns and assigning them to all of these groups. This yields a very expressive, powerful tool that anyone can use to create beautiful tangle patterns. And all this through the power of grammars. Thanks for watching and for your generous support, and I'll see you next time.
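As a rough illustration of what "generating patterns with a grammar" means, here is a tiny, hypothetical Python sketch of a stochastic grammar that rewrites a start symbol into terminal stroke patterns. The real gTangle grammar operates on 2D shapes, groups and pattern assignments; the rules and symbol names below are invented purely for illustration.

# Tiny stochastic grammar: non-terminal symbols are rewritten by randomly chosen
# rules until only terminal stroke patterns (dots, lines, curves) remain.
import random

RULES = {
    "Tangle": [["Region", "Region"], ["Region", "Region", "Region"]],
    "Region": [["dots"], ["lines"], ["curves"], ["Region", "Region"]],
}
TERMINALS = ["dots", "lines", "curves"]

def expand(symbol, depth=0, max_depth=5):
    if symbol not in RULES:
        return [symbol]                      # already a terminal stroke pattern
    if depth >= max_depth:
        return [random.choice(TERMINALS)]    # stop the recursion from growing forever
    production = random.choice(RULES[symbol])
    result = []
    for s in production:
        result.extend(expand(s, depth + 1, max_depth))
    return result

print(expand("Tangle"))   # e.g. ['curves', 'dots', 'dots', 'lines']

Each run of expand produces a different sequence of strokes, which is exactly the appeal of grammar-based generation: a handful of rules can describe an enormous family of structured results.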
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejone Fahir."}, {"start": 4.8, "end": 11.44, "text": " A Tangle pattern is a beautiful, interwoven tapestry of basic stroke patterns, like dots,"}, {"start": 11.44, "end": 14.0, "text": " straight lines and simple curves."}, {"start": 14.0, "end": 18.48, "text": " If we look at some of these works, we see that many of these are highly structured, and"}, {"start": 18.48, "end": 23.56, "text": " maybe we could automatically create such beautiful structures with a computer."}, {"start": 23.56, "end": 28.560000000000002, "text": " And now, hold on to your papers, because this piece of work is about generating Tangle"}, {"start": 28.56, "end": 30.799999999999997, "text": " patterns with grammars."}, {"start": 30.799999999999997, "end": 33.44, "text": " Okay, now, stop right there."}, {"start": 33.44, "end": 39.48, "text": " How on earth do grammars have anything to do with computer graphics or Tangle patterns?"}, {"start": 39.48, "end": 43.68, "text": " The idea of this sounds as outlandish as it gets."}, {"start": 43.68, "end": 48.84, "text": " Grammars are a set of rules that tell us how to build up a structure, such as a sentence"}, {"start": 48.84, "end": 54.64, "text": " properly, from small elements, like nouns, adjectives, pronouns, and so on."}, {"start": 54.64, "end": 60.92, "text": " Matheners also study grammars extensively and set up rules that enforce that every mathematical"}, {"start": 60.92, "end": 64.96000000000001, "text": " expression satisfies a number of desirable constraints."}, {"start": 64.96000000000001, "end": 70.16, "text": " It's not a surprise that when mathematicians talk about grammars, they will use these mathematical"}, {"start": 70.16, "end": 73.16, "text": " higher-roglyphs, like the ones you see on the screen."}, {"start": 73.16, "end": 78.52, "text": " It is a beautiful subfield of mathematics that I have studied myself before and I'm still"}, {"start": 78.52, "end": 79.52, "text": " hooked."}, {"start": 79.52, "end": 85.08, "text": " Given the fact that from grammars, we can build not only sentences, but buildings."}, {"start": 85.08, "end": 90.67999999999999, "text": " For instance, a shape grammar for buildings can describe rules like a wall can contain"}, {"start": 90.67999999999999, "end": 92.19999999999999, "text": " several windows."}, {"start": 92.19999999999999, "end": 95.39999999999999, "text": " Below a window goes a window sill."}, {"start": 95.39999999999999, "end": 99.75999999999999, "text": " One wall may have at most two doors attached and so on."}, {"start": 99.75999999999999, "end": 105.32, "text": " My friend, Martin Ilchik, is working on defining such shape grammars for buildings."}, {"start": 105.32, "end": 111.11999999999999, "text": " And using these grammars, he can generate a huge amount of different skyscrapers, facades,"}, {"start": 111.11999999999999, "end": 113.39999999999999, "text": " and all kinds of cool buildings."}, {"start": 113.39999999999999, "end": 118.28, "text": " In this piece of work, we start out with an input shape, subdivided into multiple other"}, {"start": 118.28, "end": 125.16, "text": " shapes, assigned these smaller shapes into groups, and the final tangle is obtained by choosing"}, {"start": 125.16, "end": 128.6, "text": " patterns and assigning them to all of these groups."}, {"start": 128.6, "end": 133.56, "text": " This yields a very expressive, powerful tool that anyone can use to create beautiful"}, 
{"start": 133.56, "end": 140.8, "text": " tangle patterns."}, {"start": 140.8, "end": 144.16, "text": " And all this through the power of grammars."}, {"start": 144.16, "end": 167.07999999999998, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=w2D5JR83pFI
3D Printing Materials With Subsurface Scattering | Two Minute Papers #98
Better Explained tutorials: https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/ https://betterexplained.com/cheatsheet/ Today, our main question is whether we can reproduce the effect of subsurface scattering with 3D printed materials. The input would be a real material, and the output would be an arbitrary shaped 3d printed material with similar scattering properties. Something that looks similar. ___________________________ The paper "Physical Reproduction of Materials with Specified Subsurface Scattering" is available here: http://people.csail.mit.edu/wojciech/PRO/index.html Recommended for you: Separable Subsurface Scattering (more on diffusion profiles therein) - https://www.youtube.com/watch?v=72_iAlYwl0c WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Image credits: Thumbnail background image: Cássia Afini - https://flic.kr/p/6mPh2m Ear and skin subsurface scattering images: Wikipedia Plant leaf: Dan Markeye - https://flic.kr/p/fGie2L Marble: Koen Beets and Erik Hubo - http://research.edm.uhasselt.be/thaber/subsurface.php Marble dragon: Rui Wang - http://www.cs.virginia.edu/~rw2p/cs647/project.htm Orange: PieDog Media - https://flic.kr/p/83CZc1 Voxel Lambo: Philippe Put - https://flic.kr/p/E4WLJQ Voxel Castle: post-apocalyptic research institute - https://flic.kr/p/b9xtJa Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Subsurface scattering means that not every ray of light is reflected on the surface of the material, but some of it may get inside somewhere and come out somewhere else. For instance, our skin is a great and fairly unknown example of that. We can witness this beautiful effect if we place a strong light source behind our ears. Note that many other materials, such as plant leaves, many fruits, such as apples and oranges, wax, and marble, also have subsurface scattering. The more we look at objects like these, the more we recognize how beautiful and ubiquitous subsurface scattering and translucency is in Mother Nature. And today, our main question is whether we can reproduce this kind of effect with 3D-printed materials. The input would be a real material, such as these slabs, and the output would be an arbitrarily shaped 3D-printed material with similar scattering properties. Something that looks similar. What you see here is already the result of the 3D-printing process, and wow, they look very tasty indeed. The process starts with a measurement apparatus where we grab a real material and create a diffusion profile from it that describes how light scatters inside of this material. We have talked quite a bit about diffusion profiles before, I've put some links to earlier episodes in the video description box. If you check them out, you'll see how we can add subsurface scattering to an already existing image by kind of multiplying it with another image. This is one of those amazing inventions of mankind. Now, onto 3D-printing. When we would like to 3D-print something, we basically have a few different materials to work with and we have to specify a shape. This shape is approximated with a three-dimensional grid. Each of these tiny grid elements typically has a thickness of several microns, which basically means a tiny fraction of the diameter of one hair strand. And we like to call these elements voxels. Now, before printing, we have to specify what kind of material we'd like to fill each of these voxels with. This is the general workflow for most 3D printers. What is specific to this work is that, after that, we have to take one column of this material and look at its scattering properties. Let's call this column one stacking. We could measure that stacking by hand and see how it relates to the original target material, and we are trying to minimize the difference between the two. However, it would take millions of tries and would likely take a lifetime to print just one high-quality reproduction. So basically, we have an optimization problem where we are looking for a stacking that will appear similar to the chosen diffusion profiles. The difference between the appearance of the two is to be minimized. However, we have to realize that in physics, the laws of light scattering are well understood, and the wonderful thing is that instead of printing a real object, we could just use a light simulation program to tell us how close the result would be. Now, this would work great, but it would still take an eternity, because simulating light scattering through a stack of materials would take at the very least several seconds. And we have to try up to millions of stackings for each column, and there are a lot of columns to compute. Why a lot of different columns? Well, it's because we have a heterogeneous problem, which means that the whole material can contain variations in color and scattering properties.
The geometry may also be uneven, so this is a vastly more difficult formulation of the initial problem. A classical light simulation program would only be able to solve this in a matter of years. However, there is a wonderful tool that is able to almost immediately tell us how much light is scattering inside of a stack of a ton of different materials. An almost instant multi-layer scattering tool, if you will. It really is a miracle that we can get the results for something so quickly that would otherwise require following the paths of millions of light rays. We call this technique the Hankel transform. The mathematical description of it is absolutely beautiful, but I personally think the best way of motivating these techniques is through applications, like this one. Imagine that many mathematicians have to study this transform without ever hearing what it can be used for. These are not some dry, tedious materials that one has to memorize. We can do miracles with these inventions, and I feel that people need to know about that. With the use of the Hankel transform and some additional optimizations, one can efficiently find solutions that lead to high-quality reproductions of the input material. Excellent piece of work. Definitely one of my favorites in 3D fabrication. As always, we'd love to read your feedback on this episode. Let us know whether you have found it understandable. I hope you did. Also, a quick shout-out to betterexplained.com. Please note that this is not a sponsored message. It has multiple slogans, such as "math lessons for lasting insight" or "math without endless memorization". This web page is run by Kalid Azad and contains tons of intuitive math lessons I wish I had access to during my years at the university. For instance, here is his guide on Fourier transforms, which is a staple technique in every mathematician's and engineer's skill set and is a prerequisite to understanding the Hankel transform. If you wish to learn mathematics, definitely check this website out, and if you don't wish to learn mathematics, then also definitely check this website out. Thanks for watching and for your generous support and I'll see you next time.
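For readers who want the Hankel transform mentioned above to feel more tangible, here is a minimal numerical sketch of the zeroth-order Hankel transform of a radially symmetric profile. The Gaussian profile, grids, and sample counts are illustrative assumptions for the sketch; this is not the paper's actual multi-layer scattering pipeline.

```python
# Sketch: order-0 Hankel transform F(k) = integral of f(r) * J0(k*r) * r dr, r from 0 to inf,
# evaluated by simple trapezoidal quadrature. All inputs below are hypothetical.
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order zero

def hankel_transform(f_r, r, k_values):
    """Numerically evaluate the order-0 Hankel transform of samples f_r on grid r."""
    return np.array([np.trapz(f_r * j0(k * r) * r, r) for k in k_values])

# A Gaussian-like radial falloff standing in for a radially symmetric diffusion profile.
r = np.linspace(0.0, 20.0, 4000)       # radial samples (arbitrary units)
profile = np.exp(-r**2 / 2.0)          # hypothetical profile f(r)
k = np.linspace(0.0, 5.0, 50)          # spatial frequencies to evaluate

F = hankel_transform(profile, r, k)
# Sanity check: for f(r) = exp(-r^2/2) the analytic transform is exp(-k^2/2).
print(np.max(np.abs(F - np.exp(-k**2 / 2.0))))  # should be a small number
```

The sanity check at the end is what makes such a sketch trustworthy: the numerical result can be compared against a case with a known closed-form answer before it is used on arbitrary profiles.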
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.0, "end": 11.0, "text": " Subsurface scattering means that not every ray of light is reflected on the surface of the material,"}, {"start": 11.0, "end": 15.0, "text": " but some of it may get inside somewhere and come out somewhere else."}, {"start": 15.0, "end": 19.0, "text": " For instance, our skin is a great and fairly unknown example of that."}, {"start": 19.0, "end": 24.0, "text": " We can witness this beautiful effect if we place a strong light source behind our ears."}, {"start": 24.0, "end": 31.0, "text": " Note that many other materials, such as plant leaves, many fruits, such as apples and oranges,"}, {"start": 31.0, "end": 35.0, "text": " wax, marble also have subsurface scattering."}, {"start": 35.0, "end": 44.0, "text": " The more we look at objects like these, the more we recognize how beautiful and ubiquitous subsurface scattering and translucency is in modern nature."}, {"start": 44.0, "end": 51.0, "text": " And today, our main question is whether we can reproduce this kind of effect with 3D-printed materials."}, {"start": 51.0, "end": 61.0, "text": " The input would be a real material, such as these slabs, and the output would be an orbit-very-shaped 3D-printed material with similar scattering properties."}, {"start": 61.0, "end": 63.0, "text": " Something that looks similar."}, {"start": 63.0, "end": 70.0, "text": " What you see here is already the result of the 3D-printing process and wow, they look very tasty indeed."}, {"start": 70.0, "end": 84.0, "text": " The process starts with a measurement apparatus where we grab a real material and create a diffusion profile from it that describes how light scatters inside of this material."}, {"start": 84.0, "end": 91.0, "text": " We have talked quite a bit about diffusion profiles before, I've put some links to earlier episodes in the video description box."}, {"start": 91.0, "end": 100.0, "text": " If you check it out, you'll see how we can add subsurface scattering to an already existing image by kind of multiplying it with another image."}, {"start": 100.0, "end": 103.0, "text": " This is one of those amazing inventions of mankind."}, {"start": 103.0, "end": 106.0, "text": " Now, onto 3D-printing."}, {"start": 106.0, "end": 115.0, "text": " When we would like to 3D-print something, we basically have a few different materials to work with and we have to specify a shape."}, {"start": 115.0, "end": 118.0, "text": " This shape is approximated with a three-dimensional grid."}, {"start": 118.0, "end": 128.0, "text": " Each of these tiny grid elements typically have the thickness of several microns, which basically means a tiny fraction of the diameter of one hair strand."}, {"start": 128.0, "end": 130.0, "text": " And we like to call these elements voxels."}, {"start": 130.0, "end": 137.0, "text": " Now, before printing, we have to specify what kind of material we'd like to fill each of these voxels with."}, {"start": 137.0, "end": 141.0, "text": " This is a general workflow for most 3D-printers."}, {"start": 141.0, "end": 149.0, "text": " What is specific to this work is that after that we have to take one column of this material and look at the scattering properties of it."}, {"start": 149.0, "end": 151.0, "text": " Let's call this column one stacking."}, {"start": 151.0, "end": 160.0, "text": " We could measure that stacking by hand and see how it relates to the original target material and 
we are trying to minimize the difference between the two."}, {"start": 160.0, "end": 168.0, "text": " However, it would take millions of tries and would likely take a lifetime to print just one high-quality reproduction."}, {"start": 168.0, "end": 176.0, "text": " So basically, we have an optimization problem where we are looking for a stacking that will appear similar to the chosen diffusion profiles."}, {"start": 176.0, "end": 180.0, "text": " The difference between the appearance of the two is to be minimized."}, {"start": 180.0, "end": 195.0, "text": " However, we have to realize that in physics, the laws of light scattering are well understood and the wonderful thing is that instead of printing a real object, we could just use a light simulation program to tell us how close the result should be."}, {"start": 195.0, "end": 205.0, "text": " Now, this would work great but it would still take an eternity because simulating light scattering through a stack of materials would take the very least several seconds."}, {"start": 205.0, "end": 213.0, "text": " And we have to try up to millions of stackings for each column and there is a lot of columns to compute. Why a lot of different columns?"}, {"start": 213.0, "end": 222.0, "text": " Well, it's because we have a heterogeneous problem which means that the whole material can contain variations in color and scattering properties."}, {"start": 222.0, "end": 228.0, "text": " The geometry may also be uneven, so this is a vastly more difficult formulation of the initial problem."}, {"start": 228.0, "end": 234.0, "text": " A classical light simulation program would be able to solve this while in a matter of years."}, {"start": 234.0, "end": 245.0, "text": " However, there is a wonderful tool that is able to almost immediately tell us how much light is scattering inside of a stack of a ton of different materials."}, {"start": 245.0, "end": 259.0, "text": " An almost instant multi-layer scattering tool, if you will. It really is a miracle that we can get the results for something so quickly that would otherwise require following the paths of millions of light rays."}, {"start": 259.0, "end": 272.0, "text": " We call this technique the Hankel transform. The mathematical description of it is absolutely beautiful, but I personally think the best way of motivating these techniques is through application, like this one."}, {"start": 272.0, "end": 278.0, "text": " Imagine that many mathematicians have to study this transform without ever hearing what it can be used for."}, {"start": 278.0, "end": 287.0, "text": " These are not some dry, antideus materials that one has to memorize. We can do miracles with easy inventions and I feel that people need to know about that."}, {"start": 287.0, "end": 297.0, "text": " With the use of the Hankel transform and some additional optimizations, one can efficiently find solutions that lead to high-quality reproductions of the input material."}, {"start": 297.0, "end": 302.0, "text": " Excellent piece of work. Definitely one of my favorites in 3D fabrication."}, {"start": 302.0, "end": 308.0, "text": " As always, we'd love to read your feedback on this episode. Let us know whether you have found it understandable."}, {"start": 308.0, "end": 323.0, "text": " I hope you did. Also, a quick shout out to betterexplained.com. Please note that this is not a sponsored message. 
It has multiple slogans such as math lessons for lasting insight or math without endless memorization."}, {"start": 323.0, "end": 332.0, "text": " This web page is run by Khalid Azad and contains tons of intuitive math lessons I wish I had access to during my years at the university."}, {"start": 332.0, "end": 344.0, "text": " For instance, here is his guide on Fourier transforms, which is a staple technique in every mathematicians and engineer's skill set and is a prerequisite to understanding the Hankel transform."}, {"start": 344.0, "end": 352.0, "text": " If you wish to learn mathematics, definitely check this website out and if you don't wish to learn mathematics, then also definitely check this website out."}, {"start": 352.0, "end": 356.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kwqme8mEgz4
Sound Synthesis for Fluids With Bubbles | Two Minute Papers #97
In this work, the authors created a simulator, that shows us not only the motion of a piece of fluid, but the physics of bubbles within as well. This sounds great, but there are two huge problems: one, there are a lot of them, and two, they can undergo all kinds of deformations and topology changes. ________________________ The paper "Toward Animating Water with Complex Acoustic Bubbles" is available here: http://www.cs.cornell.edu/projects/Sound/bubbles/ Recommended for you: All previous episodes on fluid simulations (and more!) - https://www.youtube.com/playlist?list=PLujxSBD-JXgnnd16wIjedAcvfQcLw0IJI WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The source of the thumbnail background image: https://pixabay.com/photo-83758/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. We have had quite a few episodes on simulating the motion of fluids and creating beautiful footage from the results. In this work, the authors created a simulator that shows us not only the motion of a piece of fluid, but the physics of bubbles within as well. This sounds great, but there are two huge problems. One, there are a lot of them. And two, they can undergo all kinds of deformations and topology changes. To conjure up video footage that is realistic and relates to the real world, several bubble-related effects such as entrainment, splitting, merging, advection, and collapsing all have to be simulated faithfully. However, there is a large body of research out there on how to simulate bubbles. And here, we are not only interested in the footage of this piece of fluid, but also in what kind of sounds it would emit when we interact with it. The result is something like this. The vibrations of a bubble are simulated by borrowing the equations that govern the movement of springs in physics. However, this by itself would only be a forlorn attempt at creating a faithful sound simulation, as there are other important factors to take into consideration. For instance, the position of the bubble matters a great deal. This example shows that the pitch of the sound is expected to be lower near solid walls, as you can see marked with dark blue on the left, right side, and below, and to be higher near the surface, which is marked with red. You can also see that there are significant differences in the frequencies depending on the position, the highest frequency being twice as high as the lowest. So this is definitely an important part of the simulation. Furthermore, taking into consideration the shape of the bubbles is also of utmost importance. As the shape of the bubble goes from an ellipsoid to something close to a sphere, the emitted sound frequency can drop by as much as 30%. Beyond these effects, there were still blind spots, even in state-of-the-art simulations. With previous techniques, a chirp-like sound was missing, which is now possible to simulate with a novel frequency extension model. Additional extensions include a technique that models the phenomenon of bubbles popping at the surface. The paper discusses which cases are likely to emphasize which of these extensions' effects. Taking it all together, it sounds magnificent. But still, however great these sounds are, without proper validation, these are still just numbers on a paper. And of course, as always, the best way of testing these kinds of works is to let reality be our judge and compare the results to real-world footage. So I think you can guess what the next test is going to be about. Our frequency model matches experiments well, such as this entrained bubble and this bubble released from an underwater tube. The authors also put on a clinic on physics and math, and the entirety of the paper is absolutely beautifully written. I definitely recommend having a look; as always, the link is available in the description box. Also, you'll find one more link to a playlist with all of our previous episodes on fluid simulations. Lots of goodies there. As always, we'd love to read your feedback on this episode, let us know whether you have found it understandable. I hope you did. Thanks for watching and for your generous support, and I'll see you next time.
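The "spring equation" view of a vibrating bubble has a classic first-order consequence: the Minnaert resonance frequency, which ties bubble radius to pitch. The sketch below evaluates that textbook formula; it is only a hedged illustration of the basic physics, not the extended frequency model from the paper, and the constants are common assumptions for air bubbles in water near the surface.

```python
# Minnaert resonance of a spherical air bubble in water:
#   f0 = (1 / (2*pi*R)) * sqrt(3 * gamma * p0 / rho)
# A first-order textbook model only; the paper's model adds wall, surface,
# shape, and chirp effects on top of ideas like this.
import math

def minnaert_frequency(radius_m, p0=101325.0, rho=998.0, gamma=1.4):
    """Resonance frequency (Hz) for a bubble of the given radius in meters."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

for r_mm in (0.5, 1.0, 2.0, 4.0):
    f = minnaert_frequency(r_mm * 1e-3)
    print(f"radius {r_mm:>4.1f} mm -> about {f:7.0f} Hz")
# Larger bubbles ring at lower pitches, matching the intuition in the episode:
# a 1 mm bubble sits around 3 kHz, a 4 mm bubble closer to 800 Hz.
```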
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 4.84, "end": 10.28, "text": " We have had quite a few episodes on simulating the motion of fluids and creating beautiful"}, {"start": 10.28, "end": 12.16, "text": " footages from the results."}, {"start": 12.16, "end": 17.44, "text": " In this work, the authors created a simulator that shows us not only the motion of a piece"}, {"start": 17.44, "end": 20.96, "text": " of fluid, but the physics of bubbles within as well."}, {"start": 20.96, "end": 24.16, "text": " This sounds great, but there are two huge problems."}, {"start": 24.16, "end": 26.28, "text": " One, there are a lot of them."}, {"start": 26.28, "end": 31.48, "text": " And two, they can undergo all kinds of deformations and topology changes."}, {"start": 31.48, "end": 36.84, "text": " To conjure up video footage that is realistic and relates to the real world, several bubble"}, {"start": 36.84, "end": 44.400000000000006, "text": " related effects such as entrainment, splitting, merging, advection, and collapsing all have"}, {"start": 44.400000000000006, "end": 46.24, "text": " to be simulated faithfully."}, {"start": 46.24, "end": 50.96, "text": " However, there is a large body of research out there to simulate bubbles."}, {"start": 50.96, "end": 55.68000000000001, "text": " And here, we are not only interested in the footage of this piece of fluid, but also"}, {"start": 55.68, "end": 59.84, "text": " what kind of sounds it would emit when we interact with it."}, {"start": 59.84, "end": 89.4, "text": " The result is something like this."}, {"start": 89.4, "end": 99.96000000000001, "text": " The vibrations of a bubble is simulated by borrowing the equations that govern the movement of"}, {"start": 99.96000000000001, "end": 101.76, "text": " springs in physics."}, {"start": 101.76, "end": 108.60000000000001, "text": " However, this, by itself, would only be a four-learn attempt at creating a faithful sound simulation"}, {"start": 108.60000000000001, "end": 112.4, "text": " as there are other important factors to take into consideration."}, {"start": 112.4, "end": 116.28, "text": " For instance, the position of the bubble matters a great deal."}, {"start": 116.28, "end": 122.24, "text": " This example shows that the pitch of the sound is expected to be lower near solid walls."}, {"start": 122.24, "end": 127.96000000000001, "text": " As you can see it marked with dark blue on the left, right side, and below, and have a higher"}, {"start": 127.96000000000001, "end": 131.64, "text": " pitch near the surface, which is marked with red."}, {"start": 131.64, "end": 136.04, "text": " You can also see that there are significant differences in the frequencies depending on"}, {"start": 136.04, "end": 140.6, "text": " the position, the highest frequency being twice as high as the lowest."}, {"start": 140.6, "end": 143.56, "text": " So this is definitely an important part of the simulation."}, {"start": 143.56, "end": 149.92000000000002, "text": " Furthermore, taking into consideration the shape of the bubbles is also of utmost importance."}, {"start": 149.92000000000002, "end": 155.36, "text": " As the shape of the bubble goes from an ellipsoid to something close to a sphere, the emitted"}, {"start": 155.36, "end": 160.04, "text": " sound frequency can drop by as much as 30%."}, {"start": 160.04, "end": 165.64000000000001, "text": " Beyond these effects, there were still blind spots, even in state of the art 
simulations."}, {"start": 165.64000000000001, "end": 170.64000000000001, "text": " With previous techniques, a chirp-like sound was missing, which is now possible to simulate"}, {"start": 170.64, "end": 174.04, "text": " with a novel frequency extension model."}, {"start": 174.04, "end": 179.48, "text": " Additional extensions include a technique that models the phenomenon of the bubbles popping"}, {"start": 179.48, "end": 180.72, "text": " at the surface."}, {"start": 180.72, "end": 186.27999999999997, "text": " The paper discusses what cases are likely to emphasize which of these extensions effects."}, {"start": 186.28, "end": 204.12, "text": " Taking it all together, it sounds magnificent."}, {"start": 204.12, "end": 220.42000000000002, "text": " The"}, {"start": 220.42, "end": 250.38, "text": " But still, however great these sounds are, without proper validation, these are still"}, {"start": 250.38, "end": 256.54, "text": " just numbers on a paper. And of course, as always, the best way of testing these kinds of works"}, {"start": 256.54, "end": 263.14, "text": " if we let reality be our judge and compare the results to real world footage. So I think you can guess"}, {"start": 263.14, "end": 269.26, "text": " what the next test is going to be about. Our frequency model matches experiments well, such as this"}, {"start": 269.26, "end": 280.94, "text": " entrained bubble and this bubble released from an underwater tube. The authors also put up a clinic"}, {"start": 280.94, "end": 286.94, "text": " on physics and math and the entirety of the paper is absolutely beautifully written. I definitely"}, {"start": 286.94, "end": 292.86, "text": " recommend having a look, as always, the link is available in the description box. Also, you'll find"}, {"start": 292.86, "end": 298.94, "text": " one more link to a playlist with all of our previous episodes on fluid simulations. Lots of goodies"}, {"start": 298.94, "end": 304.06, "text": " there. As always, we'd love to read your feedback on this episode, let us know whether you have found"}, {"start": 304.06, "end": 330.46, "text": " it understandable. I hope you did. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5-xMV3sT3Tw
3D Printing Auxetic Materials | Two Minute Papers #96
In this episode, we shall talk about auxetic materials. Auxetic materials are materials that when stretched, thicken perpendicular to the direction we're stretching them. In other words, instead of thinning, they get fatter when stretched. _____________________________________ The paper "Beyond Developable: Computational Design and Fabrication with Auxetic Materials" is available here: http://lgg.epfl.ch/publications/2016/BeyondDevelopable/index.php The tendon paper, "Negative Poisson’s ratios in tendons: An unexpected mechanical response" is available here: http://www.sciencedirect.com/science/article/pii/S1742706115002871 Our previous episode about optimization is available here: https://www.youtube.com/watch?v=1ypV5ZiIbdA WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was taken from the corresponding paper listed above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Károly Zsolnai-Fehér. We are back, and in this episode we shall talk about auxetic materials. Auxetic materials are materials that, when stretched, thicken perpendicular to the direction we are stretching them. In other words, instead of thinning, they get fatter when stretched. Really boggles the mind, right? They are excellent at energy absorption and resisting fracture, and are therefore widely used in body armor design, and I have read a research paper stating that even our tendons also show auxetic behavior. These auxetic patterns can be cut out from a number of different materials, and are also used in footwear design and actuated electronic materials. However, all of these applications are restricted to rather limited shapes. Furthermore, even the simplest objects like this sphere cannot always be approximated by inextensible materials. However, if we remove parts of this surface in a smart way, this inextensible material becomes auxetic and can approximate not only these rudimentary objects, but much more complicated shapes as well. However, achieving this is not trivial. If we try the simplest possible solution, which would basically be shoving the material onto a human head like a paper bag, then, as is aptly demonstrated in these images, it would be a fruitless endeavor. This method tries to solve this problem by flattening the target surface with an operation that mathematicians like to call a conformal mapping. For instance, the world map in our geography textbooks is also a very astutely designed conformal mapping from a geoid object, the Earth, to something that can be shown on a sheet of paper. However, this mapping has to make sense so that the information seen on this sheet of paper actually makes sense in the original 3D domain as well. This is not trivial to do. And after this mapping, our question is where the individual points would have to be located so that they satisfy three conditions. One, the resulting shape has to approximate the target shape, for instance, the human head, as faithfully as possible. Two, the construction has to be rigid. And three, when we stretch the material, the triangle cuts have to make sense and not intersect each other, so huge chasms and degenerate shapes are to be avoided. This work uses optimization to obtain a formidable solution that satisfies these constraints. If you remember our earlier episode about optimization, I said there would be a ton of examples of that in the series. This is one fine example of that. And the results are absolutely amazing. Creating a much richer set of auxetic material designs is now within the realm of possibility. And I expect that it will have applications from designing microscopic materials to designing better footwear or leather garments. And we are definitely just scratching the surface. The method supports copper, aluminum, plastic, and leather designs. And I am sure there will be mind-blowing applications that we cannot even fathom so early in the process. As an additional selling point, the materials are also reconfigurable, meaning that from the same piece of material, we can create a number of different shapes. Even non-trivial shapes with holes, such as a torus, can be created. Note that in mathematics, the torus is basically a fancy name for a donut. A truly fantastic piece of work; definitely have a look at the paper. It has a lot of topological calculations, which is an awesome subfield of mathematics.
And the authors' presentation video is excellent. Make sure to have a look at that. Let me know if you have found this episode understandable. We always get a lot of awesome feedback and we love reading your comments. Thanks for watching and for your generous support. And I'll see you next time.
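To make the idea of a conformal mapping concrete, here is a minimal sketch of one well-known example, the stereographic projection from the unit sphere onto a plane. This is only an illustration of what "flattening a surface conformally" means; it is not the specific flattening operation used in the auxetic-design paper, and the sampled points below are hypothetical.

```python
# Stereographic projection: map points on the unit sphere (except the north pole)
# onto the plane z = 0. It is a classic conformal (angle-preserving) mapping,
# much like flattening a globe onto a sheet of paper.
import numpy as np

def stereographic_project(points):
    """Project unit-sphere points (x, y, z) with z < 1 from the north pole (0, 0, 1)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    denom = 1.0 - z
    return np.column_stack((x / denom, y / denom))

# Hypothetical surface samples: a small grid of points on the unit sphere.
theta = np.linspace(0.2, np.pi - 0.2, 5)          # polar angle, avoiding the pole
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
t, p = np.meshgrid(theta, phi)
sphere_pts = np.column_stack((np.sin(t.ravel()) * np.cos(p.ravel()),
                              np.sin(t.ravel()) * np.sin(p.ravel()),
                              np.cos(t.ravel())))

flat_pts = stereographic_project(sphere_pts)
print(flat_pts.shape)   # (40, 2): every sampled surface point now lies on a flat sheet
```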
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two minute papers with Kato Ejolene Fehir."}, {"start": 5.0, "end": 10.540000000000001, "text": " We are back, and in this episode we shall talk about oxidic materials."}, {"start": 10.540000000000001, "end": 15.84, "text": " Oxetic materials are materials that when stretched, thicken, perpendicular to the direction"}, {"start": 15.84, "end": 16.96, "text": " we are stretching them."}, {"start": 16.96, "end": 21.44, "text": " In other words, instead of thinning, they get fatter when stretched."}, {"start": 21.44, "end": 23.52, "text": " Really bogus the mind, right?"}, {"start": 23.52, "end": 28.92, "text": " They are excellent at energy absorption and resisting fracture, and are therefore widely"}, {"start": 28.92, "end": 34.6, "text": " used in body armor design and have read a research paper stating that even our tendons"}, {"start": 34.6, "end": 37.24, "text": " also show oxidic behavior."}, {"start": 37.24, "end": 42.0, "text": " These oxidic patterns can be cut out from a number of different materials, and are also"}, {"start": 42.0, "end": 46.24, "text": " used in footwear design and actuated electronic materials."}, {"start": 46.24, "end": 50.92, "text": " However, all of these applications are restricted to rather limited shapes."}, {"start": 50.92, "end": 56.120000000000005, "text": " Furthermore, even the simplest objects like this sphere cannot be always approximated"}, {"start": 56.120000000000005, "end": 58.040000000000006, "text": " by inextensible materials."}, {"start": 58.04, "end": 64.12, "text": " However, if we remove parts of this surface in a smart way, this inextensible material"}, {"start": 64.12, "end": 69.6, "text": " becomes oxidic and can approximate not only these rudimentary objects, but much more"}, {"start": 69.6, "end": 71.6, "text": " complicated shapes as well."}, {"start": 71.6, "end": 74.44, "text": " However, achieving this is not trivial."}, {"start": 74.44, "end": 79.4, "text": " If we try the simplest possible solution, which would basically be shoving the material"}, {"start": 79.4, "end": 84.88, "text": " onto a human head like a paper bag, but as it is aptly demonstrated in these images,"}, {"start": 84.88, "end": 86.75999999999999, "text": " it would be a fruitless endeavor."}, {"start": 86.76, "end": 92.52000000000001, "text": " This method tries to solve this problem by flattening the target surface with an operation"}, {"start": 92.52000000000001, "end": 96.44, "text": " that mathematicians like to call a conformal mapping."}, {"start": 96.44, "end": 101.92, "text": " For instance, the world map in our geography textbooks is also a very astutely designed"}, {"start": 101.92, "end": 108.36000000000001, "text": " conformal mapping from a geoid object, the Earth, which can be shown on a sheet of paper."}, {"start": 108.36000000000001, "end": 113.16000000000001, "text": " However, this mapping has to make sense so that the information seen on this sheet of"}, {"start": 113.16, "end": 117.88, "text": " paper actually makes sense in the original 3D domain as well."}, {"start": 117.88, "end": 119.67999999999999, "text": " This is not trivial to do."}, {"start": 119.67999999999999, "end": 125.24, "text": " And after this mapping, our question is where the individual points would have to be located"}, {"start": 125.24, "end": 128.32, "text": " so that they satisfy three conditions."}, {"start": 128.32, "end": 134.12, "text": " One, the resulting shape has to approximate the target 
shape, for instance, the human head"}, {"start": 134.12, "end": 136.35999999999999, "text": " as faithfully as possible."}, {"start": 136.35999999999999, "end": 140.35999999999999, "text": " Two, the construction has to be rigid."}, {"start": 140.36, "end": 146.32000000000002, "text": " And three, when we stretch the material, the triangle cuts have to make sense and not"}, {"start": 146.32000000000002, "end": 152.32000000000002, "text": " intersect each other, so huge chasms and degenerate shapes are to be avoided."}, {"start": 152.32000000000002, "end": 158.0, "text": " This work is using optimization to obtain a formidable solution that satisfies these"}, {"start": 158.0, "end": 159.0, "text": " constraints."}, {"start": 159.0, "end": 163.92000000000002, "text": " If you remember our earlier episode about optimization, I said there will be a ton of"}, {"start": 163.92000000000002, "end": 166.08, "text": " examples of that in the series."}, {"start": 166.08, "end": 168.56, "text": " This is one fine example of that."}, {"start": 168.56, "end": 171.52, "text": " And the results are absolutely amazing."}, {"start": 171.52, "end": 177.64000000000001, "text": " The possibility of creating a much richer set of oxenic material designs is now within"}, {"start": 177.64000000000001, "end": 179.4, "text": " the realm of possibility."}, {"start": 179.4, "end": 184.8, "text": " And I expect that it will have applications from designing microscopic materials to designing"}, {"start": 184.8, "end": 187.44, "text": " better footwear or leather garments."}, {"start": 187.44, "end": 190.48000000000002, "text": " And we are definitely just scratching the surface."}, {"start": 190.48000000000002, "end": 195.96, "text": " The methods support copper, aluminum, plastic and leather designs."}, {"start": 195.96, "end": 201.6, "text": " And I am sure there will be mind-blowing applications that we cannot even fathom so early in the"}, {"start": 201.6, "end": 202.88, "text": " process."}, {"start": 202.88, "end": 207.92000000000002, "text": " As an additional selling point, the materials are also reconfigurable, meaning that from"}, {"start": 207.92000000000002, "end": 212.88, "text": " the same piece of material, we can create a number of different shapes."}, {"start": 212.88, "end": 217.56, "text": " Even non-trivial shapes with holes such as a torus can be created."}, {"start": 217.56, "end": 222.44, "text": " Note that in mathematics, the torus is basically a fancy name for a donut."}, {"start": 222.44, "end": 226.52, "text": " A truly fantastic piece of work definitely have a look at the paper."}, {"start": 226.52, "end": 231.96, "text": " It has a lot of topological calculations, which is an awesome subfield of mathematics."}, {"start": 231.96, "end": 235.24, "text": " And the author's presentation of the video is excellent."}, {"start": 235.24, "end": 236.84, "text": " Make sure to have a look at that."}, {"start": 236.84, "end": 239.64, "text": " Let me know if you have found this episode understandable."}, {"start": 239.64, "end": 243.68, "text": " We always get a lot of awesome feedback and we love reading your comments."}, {"start": 243.68, "end": 246.16, "text": " Thanks for watching and for your generous support."}, {"start": 246.16, "end": 264.84, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZaAUFqcfDJg
Patreon Update - New Machine!
Our Patreon page is available here: https://www.patreon.com/TwoMinutePapers The new PC configuration is the following: Motherboard: Asrock H170A-X1 CPU: Intel Core i5-6600 3.3GHz LGA1151 BOX RAM: 16GB 2400MHz Kingston DDRIV HyperX Fury Black Kit RAM HX424C15FB VGA: Gigabyte PCI-E NVIDIA GTX1060 WF2 OC (6144MB, DDR5, 192bit, 1556/800) SSD: Samsung 250GB SSD 2,5" SATA3 MZ-75E250B 850 EVO Case: Sharkoon VG5-W PSU: Chieftec 700W GPS-700-A8 _______________ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Dana Mattocks - https://flic.kr/p/66dtrw Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I apologize for the delays during the last week. I always put out notifications about such events on Twitter and Facebook. Make sure to follow us there so you fellow scholars know about these well in advance. And I have to say, during this time, I really missed you and making videos so much. This video is a quick update on what has happened since, and I'd also like to assure you that the next Two Minute Papers episode is already in the works and is going to arrive soon. Very soon. Patreon is a platform where you can support your favorite creators with monthly recurring tips and get cool perks in return. We have quite a few supporters who are really passionate about the show, and during the making of the last few episodes, we have encountered severe hardware issues. There were freezes, random restarts, and blue screens of death constantly, and the computer was just too unstable to record and edit Two Minute Papers. The technicians checked it out and found that quite a few parts would have to be replaced, and I figured that this would be a great time to replace this old configuration. So, we have ordered the new Two Minute Papers rig, and the entire purchase happened with the help of you, our Patreon supporters. We were able to replace it effortlessly, which is just amazing. Words fail me to describe how grateful I am for your generous support, and I am still stunned by this. It is just unfathomable to me that I am just sitting here in a room with a microphone, having way too much fun with research papers, and many of you enjoy this series enough to support it, and it has come to this. You fellow scholars make Two Minute Papers happen. Thanks so much. I tried to squeeze in a bit of footage of the new rig, and, transparency above all, I will post the configuration in the video description box for the more curious minds out there. What I can say at this point is that this new rig renders videos three times as quickly as the previous one, and even though these are just short videos, the rendering times for something in full HD and 60 frames per second are surprisingly long, even when run on the graphics card. Well, not anymore. So one more time, thank you so much for supporting the series. This is absolutely amazing. It really is. This was a quick update video. The next episode is coming soon, and I'll try my very best so we can be back at our regular schedule of two videos per week. Lots of spectacular works are on our list. Stay tuned. Oh, and by the way, we have fellow scholars watching from all around the world, and in the meantime, some of them have started translating our episodes to German, Portuguese, Spanish, and Italian. For some reason, I cannot see the names of the kind people who took their time to contribute. So I'd like to kindly thank you for your work. It makes Two Minute Papers accessible to even more people, which is amazing. Thanks for watching, and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 7.48, "text": " I apologize for the delays during last week."}, {"start": 7.48, "end": 12.16, "text": " I always put out notifications about such events on Twitter and Facebook."}, {"start": 12.16, "end": 16.92, "text": " Make sure to follow us there so you fellow scholars know about these well in advance."}, {"start": 16.92, "end": 21.84, "text": " And I have to say, during this time, I really miss you and making videos so much."}, {"start": 21.84, "end": 26.92, "text": " This video is a quick update on what has happened since, and I'd also like to assure you that"}, {"start": 26.92, "end": 32.28, "text": " the next Two Minute Papers episode is already in the works and is going to arrive soon."}, {"start": 32.28, "end": 33.64, "text": " Very soon."}, {"start": 33.64, "end": 38.84, "text": " Patreon is a platform where you can support your favorite creators with monthly recurring"}, {"start": 38.84, "end": 41.400000000000006, "text": " tips and get cool perks in return."}, {"start": 41.400000000000006, "end": 45.480000000000004, "text": " We have quite a few supporters who are really passionate about the show, and during the"}, {"start": 45.480000000000004, "end": 50.52, "text": " making of the last few episodes, we have encountered severe hardware issues."}, {"start": 50.52, "end": 55.96, "text": " There were freezes, random restarts, blue screens of death constantly, and the computer"}, {"start": 55.96, "end": 60.08, "text": " was just too unstable to record and added Two Minute Papers."}, {"start": 60.08, "end": 65.12, "text": " The technicians checked it out and found that quite a few parts would have to be replaced,"}, {"start": 65.12, "end": 69.92, "text": " and I figured that this would be a great time to replace this old configuration."}, {"start": 69.92, "end": 75.36, "text": " So, we have ordered the new Two Minute Papers rig, and the entire purchase happened with"}, {"start": 75.36, "end": 78.52, "text": " the help of you, our Patreon supporters."}, {"start": 78.52, "end": 83.0, "text": " We were able to replace it effortlessly, which is just amazing."}, {"start": 83.0, "end": 87.84, "text": " Users fail me to describe how grateful I am for your generous support, and I am still"}, {"start": 87.84, "end": 89.36, "text": " stunned by this."}, {"start": 89.36, "end": 95.04, "text": " It is just unfathomable to me that I am just sitting here in a room with a microphone,"}, {"start": 95.04, "end": 99.84, "text": " having way too much fun with research papers, and many of you enjoy this series enough to"}, {"start": 99.84, "end": 102.68, "text": " support it, and it had come to this."}, {"start": 102.68, "end": 105.4, "text": " You fellow scholars make Two Minute Papers happen."}, {"start": 105.4, "end": 106.92, "text": " Thanks so much."}, {"start": 106.92, "end": 111.52, "text": " I tried to squeeze in a bit of footage of the new rig, and transparency above all will"}, {"start": 111.52, "end": 116.6, "text": " post the configuration in the video description box for the more curious minds out there."}, {"start": 116.6, "end": 121.64, "text": " What I can say at this point is that this new rig renders videos three times as quickly"}, {"start": 121.64, "end": 126.75999999999999, "text": " as the previous one, and even though these are just short videos, the rendering times for"}, {"start": 126.75999999999999, "end": 
132.64, "text": " something in full HD and 60 frames per second are surprisingly long, even when run on"}, {"start": 132.64, "end": 134.12, "text": " the graphical card."}, {"start": 134.12, "end": 136.04, "text": " Well, not anymore."}, {"start": 136.04, "end": 140.0, "text": " So one more time, thank you so much for supporting the series."}, {"start": 140.0, "end": 142.0, "text": " This is absolutely amazing."}, {"start": 142.0, "end": 143.44, "text": " It really is."}, {"start": 143.44, "end": 145.24, "text": " This was a quick update video."}, {"start": 145.24, "end": 150.28, "text": " The next episode is coming soon, and I'll try my very best so we can be back at our regular"}, {"start": 150.28, "end": 152.76, "text": " schedule of two videos per week."}, {"start": 152.76, "end": 155.56, "text": " Lots of spectacular works are on our list."}, {"start": 155.56, "end": 156.56, "text": " Stay tuned."}, {"start": 156.56, "end": 161.8, "text": " Oh, and by the way, we have fellow scholars watching from all around the world, and in"}, {"start": 161.8, "end": 168.16, "text": " the meantime, some of them have started translating our episodes to German, Portuguese, Spanish,"}, {"start": 168.16, "end": 169.16, "text": " and Italian."}, {"start": 169.16, "end": 173.76, "text": " For some reason, I cannot see the names of the kind people who took their time to contribute."}, {"start": 173.76, "end": 176.6, "text": " So I'd like to kindly thank you for your work."}, {"start": 176.6, "end": 181.12, "text": " It makes two minute papers accessible to even more people, which is amazing."}, {"start": 181.12, "end": 205.08, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Mx8viOFKiIs
Sound Propagation With Adaptive Impulse Responses | Two Minute Papers #95
A realistic simulation of sounds within virtual environments dramatically improves the immersion of the user in computer games and virtual reality applications. To be able to simulate these effects, we need to compute the interaction between sound waves and the geometry and materials within the scene. Let's see how this work accomplishes it! ______________________________ The paper "Adaptive Impulse Response Modeling for Interactive Sound Propagation" is available here: http://gamma.cs.unc.edu/ADAPTIVEIR/ Recommended for you: Rocking Out With Convolutions - https://www.youtube.com/watch?v=JKYQOAZRZu4 All light-transport related episodes: https://www.youtube.com/playlist?list=PLujxSBD-JXgk1hb8lyu6sTYsLL39r_3bG WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Robert - https://flic.kr/p/9YViuT Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Have you ever wondered how your voice or your guitar would sound in the middle of a space station? A realistic simulation of sounds within virtual environments dramatically improves the immersion of the user in computer games and virtual reality applications. To be able to simulate these effects, we need to compute the interaction between sound waves and the geometry and materials within the scene. If you remember, we also had quite a few episodes about light simulations, where we simulated the interaction of light rays, or waves, and the scene we have at hand. Sounds quite similar, right? Well, kind of. And the great thing is that we can reuse quite a bit of this knowledge and some of the equations for light transport for sound. This technique we call path tracing, and it is one of the many well-known techniques used for sound simulation. We can use path tracing to simulate the path of many waves to obtain an impulse response, which is a simple mathematical function that describes the reverberation that we would hear if we shot a gun in a given scene, such as a space station or a church. After we obtain these impulse responses, we can use an operation called convolution with our input signal, like our voice, to get a really convincing result. We have talked about this in more detail in earlier episodes; I've put a link to them in the video description box. It is important to note that the impulse response depends on the scene and on where we, the listeners, are exactly in the scene. At pretty much every concert ever, we find that sound reverberations are quite different in the middle of the arena versus standing at the back. One of the main contributions of this work is that it exploits temporal coherence. This means that even though the impulse response is different if we stand at different places, these locations don't change arbitrarily, so we can reuse a lot of information from the previous few impulse responses that we worked so hard to compute. This way, we can get away with tracing much fewer rays and still get high-quality results. In the best cases, the algorithm executes five times as quickly as previous techniques, and the memory requirements are significantly more favorable. The paper also contains a user study. Limitations include somewhat overly smooth audio signals and some fidelity loss in the lower frequency domains. Some of these scenes in the footage showcase up to 24 distinct sound sources, and all of them are simulated against the geometry and the materials found in the scene. So let's listen together and delight in these magnificent results. Thanks for watching and for your generous support, and I'll see you next time. Thanks for watching.
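The final step described above, convolving a dry input signal with an impulse response, can be sketched in a few lines. The impulse response below is a synthetic, exponentially decaying noise burst standing in for a simulated room response; it and all parameters are illustrative assumptions, not output of the paper's method.

```python
# Sketch of reverberation via convolution: wet = dry (*) impulse_response.
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 44100
t = np.arange(0, 1.0, 1.0 / sample_rate)

# Dry input: a short 440 Hz tone burst standing in for a voice or guitar signal.
dry = np.sin(2.0 * np.pi * 440.0 * t) * np.exp(-4.0 * t)

# Hypothetical impulse response: decaying noise as a crude stand-in for a room.
rng = np.random.default_rng(0)
ir_t = np.arange(0, 1.5, 1.0 / sample_rate)
impulse_response = rng.standard_normal(ir_t.size) * np.exp(-3.0 * ir_t)
impulse_response[0] = 1.0                      # keep the direct sound

wet = fftconvolve(dry, impulse_response)       # the reverberated ("wet") signal
wet /= np.max(np.abs(wet))                     # normalize to avoid clipping
print(dry.size, impulse_response.size, wet.size)
```

Swapping in an impulse response measured or simulated for a different listener position changes the character of the reverberation, which is exactly why the position-dependent, temporally coherent computation discussed above matters.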
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.0, "end": 12.0, "text": " Have you ever wondered how your voice or your guitar would sound in the middle of a space station?"}, {"start": 12.0, "end": 22.0, "text": " A realistic simulation of sounds with and virtual environments dramatically improves the immersion of the user in computer games and virtual reality applications."}, {"start": 22.0, "end": 30.0, "text": " To be able to simulate these effects, we need to compute the interaction between sound waves and the geometry and materials within the scene."}, {"start": 30.0, "end": 40.0, "text": " If you remember, we also had quite a few episodes about light simulations where we simulated the interaction of light waves or waves and the scene we have at hand."}, {"start": 40.0, "end": 44.0, "text": " Sounds quite similar, right? Well, kind of."}, {"start": 44.0, "end": 52.0, "text": " And the great thing is that we can reuse quite a bit of this knowledge and some of the equations for light transport for sound."}, {"start": 52.0, "end": 59.0, "text": " This technique we call path tracing and it is one of the many well-known techniques used for sound simulation."}, {"start": 59.0, "end": 73.0, "text": " We can use path tracing to simulate the path of many waves to obtain an impulse response, which is a simple mathematical function that describes the reverberation that we hear if we shoot a gun in a given scene,"}, {"start": 73.0, "end": 76.0, "text": " such as a space station or a church."}, {"start": 76.0, "end": 86.0, "text": " After we obtain these impulse responses, we can use an operation called the convolution with our input signal like our voice to get a really convincing result."}, {"start": 86.0, "end": 92.0, "text": " We have talked about this in more detail in earlier episodes as put a link for them in the video description box."}, {"start": 92.0, "end": 101.0, "text": " It is important to note that the impulse response depends on the scene and where we, the listeners are exactly in the scene."}, {"start": 101.0, "end": 110.0, "text": " Pretty much every concert ever, we find that sound reverberations are quite different in the middle of the arena versus standing at the back."}, {"start": 110.0, "end": 115.0, "text": " One of the main contributions of this work is that it exploits temporal coherence."}, {"start": 115.0, "end": 130.0, "text": " This means that even though the impulse response is different, if we stand at different places, but these locations don't change arbitrarily, and we can reuse a lot of information from the previous few impulse responses that we worked so hard to compute."}, {"start": 130.0, "end": 136.0, "text": " This way, we can get away with tracing much fewer rays and still get high quality results."}, {"start": 136.0, "end": 145.0, "text": " In the best cases, the algorithm executes 5 times as quickly as previous techniques and the memory requirements are significantly more favorable."}, {"start": 145.0, "end": 148.0, "text": " The paper also contains a user study."}, {"start": 148.0, "end": 155.0, "text": " Limitations include a bit overly smooth audio signals and some fidelity loss in the lower frequency domains."}, {"start": 155.0, "end": 166.0, "text": " Some of these scenes in the footage showcase up to 24 distinct sound sources and all of them are simulated against the geometry and the materials found in the scene."}, {"start": 166.0, "end": 195.0, "text": " So 
let's listen together and delight in these magnificent results."}, {"start": 196.0, "end": 220.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}, {"start": 226.0, "end": 240.0, "text": " Thanks for watching."}]
Two Minute Papers
https://www.youtube.com/watch?v=bLFISzfQCDQ
Estimating Matrix Rank With Neural Networks | Two Minute Papers #94
This tongue in cheek work is about identifying matrix ranks from images, plugging in a convolutional neural network where it is absolutely inaproppriate to use. The paper "Visually Identifying Rank" is available here: http://www.oneweirdkerneltrick.com/rank.pdf David Fouhey's website is available here: http://www.cs.cmu.edu/~dfouhey/ The machine learning calculator is available here: http://armlessjohn404.github.io/calcuMLator/ The paper "Separable Subsurface Scattering" is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ __________________________ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Comfreak - https://pixabay.com/photo-356024/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This piece of work is not meant to be a highly useful application, only a tongue-in-cheek jab at the rising trend of trying to solve simple problems using deep learning without carefully examining the problem at hand. As always, we note that all intuitive explanations are wrong, but some are helpful, and the most precise way to express these thoughts is by using mathematics. However, we shall leave that to the textbooks and we'll try to understand these concepts by floating about on the wings of intuition. In mathematics, a matrix is a rectangular array in which we can store numbers and symbols. Matrices can be interpreted in many ways; for instance, we can think of them as transformations. Multiplying a matrix with a vector means applying this transform to the vector, such as scaling, rotation, or shearing. The rank of a matrix can be intuitively explained in many ways. My favorite intuition is that the rank encodes the information content of the matrix. For instance, in an earlier work on separable subsurface scattering, we recognized that many of these matrices that encode light scattering inside translucent materials are of relatively low rank. This means that the information within is highly structured and it is not random noise. And from this low-rank property it follows that we can compress and represent this phenomenon using simpler data structures, leading to an extremely efficient algorithm to simulate light scattering within our skin. However, the main point is that finding out the rank of a large matrix is an expensive operation. It is also important to note that we can visualize these matrices by mapping the numbers within to different colors. As a fun side note, the paper finds that the uglier the color scheme is, the better suited it is for learning. This way, after computing the ranks of many matrices, we can create a lot of input images and output ranks for the neural network to learn on. After that, the goal is that we feed in an unknown matrix in the form of an image and the network would have to guess what the rank is. It is almost like having an expert scientist unleash his intuition on such a matrix, much like a fun guessing game for intoxicated mathematicians. And the ultimate question, as always, is how does this knowledge learned by the neural network generalize? The results are decent, but not spectacular; they also offer some insights as to which matrices have surprising ranks. We can also try computing the products of matrices, which intuitively translates to guessing the result after we have done one transformation after the other, like the output of scaling after a rotation operation. They also try to compute the inverse of matrices, for which the intuition can be undoing the transformation. If it is a rotation in a given direction, the inverse would be rotating back the exact same amount, or if we scaled something up, then scaling it back down would be its inverse. Of course, these are not the only operations that we can do with matrices, we only use these for the sake of demonstration. The lead author states on his website that this paper shows that, quote, linear algebra can be replaced with machine learning. End quote. Talk about being funny and tongue in cheek. Also, I have linked the website of David in the description box. He has a lot of great works, and I am surely not doing him justice by covering, of all those great works, this one.
Rufus von Wufels, graduate of the prestigious Maddie Paws University, was the third author of the paper, overseeing the entirety of the work and making sure that the quality of the results is impeccable. As future work, I would propose replacing the basic mathematical operators, such as addition and multiplication, by machine learning, except that it is already done, and it is hilariously fun and even supports division by zero. Talk about the almighty powers of deep learning. Thanks for watching and for your generous support, and I'll see you next time.
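The ground-truth side of this tongue-in-cheek task, computing exact ranks and turning matrices into image/label pairs, takes only a few lines of linear algebra. The sketch below is a hedged illustration of that data-generation idea; matrix sizes, ranks, and the normalization are assumptions for the example, and the CNN training itself is omitted.

```python
# Build random low-rank matrices, compute their exact ranks, and pair each
# matrix-as-image with its rank label, i.e. the kind of data a CNN could be
# trained on for this (deliberately silly) task.
import numpy as np

def random_low_rank_matrix(n, rank, rng):
    """Return an n x n matrix whose rank equals `rank` (with probability 1)."""
    a = rng.standard_normal((n, rank))
    b = rng.standard_normal((rank, n))
    return a @ b   # product of n x r and r x n factors has rank at most r

rng = np.random.default_rng(42)
images, labels = [], []
for true_rank in range(1, 11):
    m = random_low_rank_matrix(32, true_rank, rng)
    # NumPy computes the rank via an SVD and a tolerance on the singular values.
    assert np.linalg.matrix_rank(m) == true_rank
    # Map entries to [0, 1] so the matrix can be colored and fed to a network.
    img = (m - m.min()) / (m.max() - m.min())
    images.append(img)
    labels.append(true_rank)

print(len(images), "training images, ranks:", labels)
```

The exact rank computation here is cheap for small matrices; the expense the transcript alludes to only shows up for very large matrices, which is part of why the learned shortcut is a joke rather than a practical tool.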
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojone Fahir."}, {"start": 5.0, "end": 10.32, "text": " This piece of work is not meant to be a highly useful application, only a tongue-in-cheek"}, {"start": 10.32, "end": 15.8, "text": " jab at the rising trend of trying to solve simple problems using deep learning without"}, {"start": 15.8, "end": 18.44, "text": " carefully examining the problem at hand."}, {"start": 18.44, "end": 24.76, "text": " As always, we note that all intuitive explanations are wrong, but some are helpful, and the most"}, {"start": 24.76, "end": 29.400000000000002, "text": " precise way to express these thoughts can be done by using mathematics."}, {"start": 29.4, "end": 34.28, "text": " However, we shall leave that to the textbooks and we'll try to understand these concepts"}, {"start": 34.28, "end": 37.68, "text": " by floating about on the wings of intuition."}, {"start": 37.68, "end": 44.28, "text": " In mathematics, a matrix is a rectangular array in which we can store numbers and symbols."}, {"start": 44.28, "end": 50.76, "text": " Matrices can be interpreted in many ways, for instance, we can think of them as transformations."}, {"start": 50.76, "end": 56.2, "text": " Multiplying a matrix without vector means applying this transform to the vector, such as"}, {"start": 56.2, "end": 58.599999999999994, "text": " scaling, rotation, or shearing."}, {"start": 58.6, "end": 62.800000000000004, "text": " The rank of a matrix can be intuitively explained in many ways."}, {"start": 62.800000000000004, "end": 68.64, "text": " My favorite intuition is that the rank encodes the information content of the matrix."}, {"start": 68.64, "end": 73.84, "text": " For instance, in an earlier work on separable subsurface scattering, we recognize that many"}, {"start": 73.84, "end": 80.04, "text": " of these matrices that encode light scattering inside translucent materials are of relatively"}, {"start": 80.04, "end": 81.04, "text": " low rank."}, {"start": 81.04, "end": 86.76, "text": " This means that the information within is highly structured and it is not random noise."}, {"start": 86.76, "end": 92.36, "text": " And from this low rank property follows that we can compress and represent this phenomenon"}, {"start": 92.36, "end": 97.64, "text": " using simpler data structures, leading to an extremely efficient algorithm to simulate"}, {"start": 97.64, "end": 99.88000000000001, "text": " light scattering within our skin."}, {"start": 99.88000000000001, "end": 105.36000000000001, "text": " However, the main point is that finding out the rank of a large matrix is an expensive"}, {"start": 105.36000000000001, "end": 106.36000000000001, "text": " operation."}, {"start": 106.36000000000001, "end": 111.28, "text": " It is also important to note that we can also visualize these matrices by mapping the"}, {"start": 111.28, "end": 114.4, "text": " numbers within to different colors."}, {"start": 114.4, "end": 119.08000000000001, "text": " As a fun side note, the paper finds that the uglier the color scheme is, the better"}, {"start": 119.08000000000001, "end": 121.32000000000001, "text": " suited it is for learning."}, {"start": 121.32000000000001, "end": 127.04, "text": " This way, after computing the ranks of many matrices, we can create a lot of input images"}, {"start": 127.04, "end": 130.76, "text": " and output ranks for the neural network to learn on."}, {"start": 130.76, "end": 135.88, "text": " After that, the goal is that we feed in an 
unknown matrix in the form of an image and"}, {"start": 135.88, "end": 139.24, "text": " the network would have to guess what the rank is."}, {"start": 139.24, "end": 144.96, "text": " It is almost like having an expert scientist unleash his intuition on such a matrix, much"}, {"start": 144.96, "end": 149.36, "text": " like a fun guessing game for intoxicated mathematicians."}, {"start": 149.36, "end": 154.08, "text": " And the ultimate question, as always, is how does this knowledge learned by the neural"}, {"start": 154.08, "end": 156.08, "text": " network generalize?"}, {"start": 156.08, "end": 161.76000000000002, "text": " The results are decent, but not spectacular, but they also offer some insights as to which"}, {"start": 161.76000000000002, "end": 164.88, "text": " matrices have surprising ranks."}, {"start": 164.88, "end": 170.6, "text": " We can also try computing the products of matrices, which intuitively translates to guessing"}, {"start": 170.6, "end": 176.76, "text": " the result after we have done one transformation after the other, like the output of scaling"}, {"start": 176.76, "end": 179.35999999999999, "text": " after a rotation operation."}, {"start": 179.35999999999999, "end": 185.24, "text": " They also try to compute the inverse of matrices for which the intuition can be undoing the"}, {"start": 185.24, "end": 186.56, "text": " transformation."}, {"start": 186.56, "end": 191.84, "text": " If it is a rotation to a given direction, the inverse would be rotating back the exact"}, {"start": 191.84, "end": 198.20000000000002, "text": " same amount, or if we scaled something up, then scaling it back down would be its inverse."}, {"start": 198.20000000000002, "end": 203.16, "text": " Of course, these are not the only operations that we can do with matrices, we only use"}, {"start": 203.16, "end": 205.72, "text": " these for the sake of demonstration."}, {"start": 205.72, "end": 211.88, "text": " The lead author states on his website, these papers shows that, quote, linear algebra can"}, {"start": 211.88, "end": 214.68, "text": " be replaced with machine learning."}, {"start": 214.68, "end": 215.68, "text": " End quote."}, {"start": 215.68, "end": 218.36, "text": " Talk about being funny and tongue in cheek."}, {"start": 218.36, "end": 222.12, "text": " Also, I have linked the website of David in the description box."}, {"start": 222.12, "end": 227.92000000000002, "text": " He has a lot of great works, and I am surely not doing him justice by of all those great"}, {"start": 227.92000000000002, "end": 230.88000000000002, "text": " works covering this one."}, {"start": 230.88000000000002, "end": 237.48000000000002, "text": " Rufus von Wufels, graduate of the prestigious Maddie Paws University, was the third author"}, {"start": 237.48000000000002, "end": 242.84, "text": " of the paper, overlooking the entirety of the work and making sure that the quality of"}, {"start": 242.84, "end": 245.12, "text": " the results is impeccable."}, {"start": 245.12, "end": 250.56, "text": " As future work, I would propose replacing the basic mathematical operators, such as"}, {"start": 250.56, "end": 257.8, "text": " addition and multiplication by machine learning, except that it is already done and is hilariously"}, {"start": 257.8, "end": 260.96, "text": " fun and it even supports division by zero."}, {"start": 260.96, "end": 263.96, "text": " Talk about the almighty powers of deep learning."}, {"start": 263.96, "end": 283.96, "text": " Thanks for watching and for your generous support, and 
I'll see you next time."}]
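To make the rank intuition above a bit more concrete, here is a minimal sketch, not taken from the paper, that assumes NumPy is available: it builds a matrix that is secretly low rank, asks for its rank, and then compresses it with a truncated SVD, which is the same flavor of trick that makes the subsurface scattering matrices so compact.

```python
import numpy as np

# Build a matrix that is secretly low rank: the outer product of two vectors
# always has rank 1, so a sum of three such products has rank at most 3.
rng = np.random.default_rng(0)
A = sum(np.outer(rng.normal(size=64), rng.normal(size=64)) for _ in range(3))

print(np.linalg.matrix_rank(A))  # expected: 3

# Truncated SVD: keep only the k largest singular values to compress A.
U, s, Vt = np.linalg.svd(A)
k = 3
A_compressed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction matches the original almost exactly.
print(np.allclose(A, A_compressed))  # expected: True
```

In the paper's setup, of course, the network never gets to call a rank routine; it only sees a color-mapped image of the matrix and has to guess this number.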
Two Minute Papers
https://www.youtube.com/watch?v=CqFIVCD1WWo
WaveNet by Google DeepMind | Two Minute Papers #93
Let's talk about Google DeepMind's Wavenet! This piece of work is about generating audio waveforms for Text To Speech and more. Text To Speech basically means that we have a voice reading whatever we have written down. The difference in this work, is, however that it can synthesize these samples in someone's voice provided that we have training samples of this person speaking. __________________________ The paper "WaveNet: A Generative Model for Raw Audio" is available here: https://arxiv.org/abs/1609.03499 The blog post about this with the sound samples is available here: https://deepmind.com/blog/wavenet-generative-model-raw-audio/ The machine learning reddit thread about this paper is available here: https://www.reddit.com/r/MachineLearning/comments/51sr9t/deepmind_wavenet_a_generative_model_for_raw_audio/?ref=search_posts Recommended for you: Every Two Minute Papers episode on deep learning: https://www.youtube.com/playlist?list=PLujxSBD-JXglGL3ERdDOhthD3jTlfudC2 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Thanks so much to JulioC EA for the Spanish captions! :) Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was found on Pixabay - https://pixabay.com/hu/spektrum-hangsz%C3%ADnszab%C3%A1lyz%C3%B3-h%C3%A1tt%C3%A9r-545827/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When I opened my inbox today, I was greeted by a huge deluge of messages about WaveNet. Well, first, it's great to see that so many people are excited about these inventions, and second, may all your wishes come true as quickly as this one. So here we go. This piece of work is about generating audio waveforms for text to speech and more. Text to speech basically means that we have a voice reading whatever we have written down. The difference in this work is, however, that it can synthesize these samples in someone's voice, provided that we have training samples of this person speaking. The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone. The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone. The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone. The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone. It also generates waveforms sample by sample, which is particularly perilous, because we typically need to produce these at a rate of 16,000 to 24,000 samples per second. And as we listen to the TV, radio and talk to each other several hours a day, the human ear and brain are particularly suited to processing this kind of signal. If the result is off by only the slightest amount, we immediately recognize it. It is not using a recurrent neural network, which is typically suited to learn sequences of things and is widely used for sound synthesis. It is using a convolutional neural network, which is quite surprising, because it is not meant to process sequences of data that change in time. However, this variant contains an extension that is able to do that. They call this extension dilated convolutions, and they open up the possibility of making large skips in the input data so we have a better global view of it. If we were working in computer vision, it would be like increasing the receptive field of the eye so we can see the entire landscape and not only a tree on a photograph. It is also a bit like the temporal coherence problem we have talked about earlier. Taking all this into consideration results in more consistent outputs over larger time scales, so the technique knows what it had done several seconds ago. Also, training a convolutional neural network is a walk in the park compared to a recurrent neural network. Really cool. And the results beat all existing widely used techniques by a large margin. One of these is the concatenative technique, which builds sentences from a huge amount of small speech fragments. These have seen a ton of improvements during the years, but the outputs are still robotic and it is noticeable that we are not listening to a human but a computer. The DeepMind guys also report that, quote: Notice that non-speech sounds such as breathing and mouth movements are also sometimes generated by WaveNet; this reflects the greater flexibility of a raw audio model. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. Aspects of the Sublime in English poetry and painting, 1770-1850. At the same time, I'd like to note that in the next few episodes, it may be that my voice is a bit different. But don't worry about that.
It may also happen that while I am on a vacation, new episodes and voice samples pop up on the channel; please don't worry about that either. Everything is working as intended. They also experimented with music generation and the results are just stunning. I don't know what to say. These difficult problems, these impenetrable walls crumble one after another as DeepMind takes them on. Insanity. Their blog post and the paper are both really well written, make sure to check them out. They are both linked in the video description box. I wager that artistic style transfer for sound and instruments is not only coming, but it will be here soon. I imagine that we'll play a guitar and it will sound like a harp, and we'll be able to sing something in Lady Gaga's voice and intonation. I've also seen someone pitching the idea of creating audiobooks automatically with such a technique. Wow. I travel a lot and I'm almost always on the go, so I personally would love to have such audiobooks. I have linked the mentioned machine learning Reddit thread in the description box. As always, there's lots of great discussion and ideas there. It was also reported that the algorithm currently takes 90 minutes to synthesize one second of sound waveforms. But you know the drill: one follow-up paper down the line, it will take only a few minutes; a few more papers down the line, it will be real time. Just think about all these advancements. What a time we are living in, and I am extremely excited to present them all to you Fellow Scholars in Two Minute Papers. Make sure to leave your thoughts and ideas in the comments section, we love reading them. Thanks for watching and for your generous support and I'll see you next time.
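To put a number on the "better global view" that dilated convolutions buy, here is a back-of-the-envelope sketch in Python; it is not DeepMind's implementation, just the standard receptive-field arithmetic for a stack of kernel-size-2 causal convolutions whose dilation doubles at every layer, the arrangement popularized by WaveNet-style models.

```python
# Receptive field of a stack of dilated causal convolutions with kernel size 2
# and dilation doubling at each layer (1, 2, 4, ...).
def receptive_field(num_layers, kernel_size=2):
    field = 1
    for layer in range(num_layers):
        dilation = 2 ** layer
        field += (kernel_size - 1) * dilation
    return field

for layers in (1, 5, 10):
    print(layers, receptive_field(layers))
# 1 layer   ->    2 samples
# 5 layers  ->   32 samples
# 10 layers -> 1024 samples: the "large skips" give an exponentially wider view.
```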
[{"start": 0.0, "end": 5.0600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.0600000000000005, "end": 11.22, "text": " When I opened my inbox today, I was greeted by a huge dayage of messages about WaveNet."}, {"start": 11.22, "end": 16.62, "text": " Well, first, it's great to see that so many people are excited about these inventions,"}, {"start": 16.62, "end": 21.54, "text": " and second, may all your wishes come true as quickly as this one."}, {"start": 21.54, "end": 22.54, "text": " So here we go."}, {"start": 22.54, "end": 28.42, "text": " This piece of work is about generating audio waveforms for text to speech and more."}, {"start": 28.42, "end": 33.22, "text": " text to speech basically means that we have a voice reading whatever we have written down."}, {"start": 33.22, "end": 38.300000000000004, "text": " The difference in this work is, however, that it can synthesize these samples in someone's"}, {"start": 38.300000000000004, "end": 44.02, "text": " voice, provided that we have training samples of this person speaking."}, {"start": 44.02, "end": 50.18000000000001, "text": " The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone."}, {"start": 50.18000000000001, "end": 56.46, "text": " The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone."}, {"start": 56.46, "end": 62.54, "text": " The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone."}, {"start": 62.54, "end": 68.86, "text": " The avocado is a pear-shaped fruit with leathery skin, smooth edible flesh and a large stone."}, {"start": 68.86, "end": 74.34, "text": " It also generates waveforms, sample by sample, which is particularly perilous, because"}, {"start": 74.34, "end": 80.82, "text": " we typically need to produce these at the rate of 16 to 24,000 samples per second."}, {"start": 80.82, "end": 86.46, "text": " And as we listen to the TV, radio and talk to each other several hours a day, the human"}, {"start": 86.46, "end": 91.3, "text": " ear and brain is particularly suited to processing this kind of signal."}, {"start": 91.3, "end": 96.58, "text": " If the result is off by only the slightest amount, we immediately recognize it."}, {"start": 96.58, "end": 101.74, "text": " It is not using a recurrent neural network, which is typically suited to learn sequences"}, {"start": 101.74, "end": 105.61999999999999, "text": " of things and is widely used for sound synthesis."}, {"start": 105.61999999999999, "end": 110.78, "text": " It is using a convolutional neural network, which is quite surprising, because it is not"}, {"start": 110.78, "end": 114.7, "text": " meant to process sequences of data that change in time."}, {"start": 114.7, "end": 119.1, "text": " However, this variant contains an extension that is able to do that."}, {"start": 119.1, "end": 124.7, "text": " They call this extension, dilated convolutions, and they open up the possibility of making"}, {"start": 124.7, "end": 129.42000000000002, "text": " large skips in the input data so we have a better global view of it."}, {"start": 129.42000000000002, "end": 134.3, "text": " If we were working in computer vision, it would be like increasing the receptive field"}, {"start": 134.3, "end": 140.06, "text": " of the eye so we can see the entire landscape and not only a tree on a photograph."}, {"start": 140.06, "end": 144.54, "text": " It is also a bit like the temporal coherence problem we have 
talked about earlier."}, {"start": 144.54, "end": 150.2, "text": " Taking all this into consideration results in more consistent outputs over larger time"}, {"start": 150.2, "end": 154.82, "text": " scales so the technique knows what it had done several seconds ago."}, {"start": 154.82, "end": 161.02, "text": " Also, training a convolutional neural network is a walk in the park compared to a recurrent"}, {"start": 161.02, "end": 162.62, "text": " neural network."}, {"start": 162.62, "end": 163.86, "text": " Really cool."}, {"start": 163.86, "end": 168.82, "text": " And the results speed all existing widely used techniques by a large margin."}, {"start": 168.82, "end": 173.7, "text": " One of these is the concatenative technique, which builds sentences from a huge amount"}, {"start": 173.7, "end": 175.78, "text": " of small speech fragments."}, {"start": 175.78, "end": 180.85999999999999, "text": " These have seen a ton of improvements during the years, but the outputs are still robotic"}, {"start": 180.85999999999999, "end": 185.26, "text": " and it is noticeable that we are not listening to a human but a computer."}, {"start": 185.26, "end": 188.9, "text": " The DeepMind guys also report that quote."}, {"start": 188.9, "end": 195.45999999999998, "text": " Notice that non-speech sounds such as breathing and mouth movements are also sometimes generated"}, {"start": 195.46, "end": 202.9, "text": " by wavenet, this reflects the greater flexibility of a raw audio model."}, {"start": 202.9, "end": 209.38, "text": " The Blue Lagoon is a 1980 American romance and adventure film directed by Randall Cliser."}, {"start": 209.38, "end": 216.74, "text": " The Blue Lagoon is a 1980 American romance and adventure film directed by Randall Cliser."}, {"start": 216.74, "end": 222.82, "text": " Aspects of the Sublime in English poetry and painting, 1770-1850."}, {"start": 222.82, "end": 233.45999999999998, "text": " At the same time, I'd like to note that in the next few episodes, it may be that my"}, {"start": 233.45999999999998, "end": 235.5, "text": " voice is a bit different."}, {"start": 235.5, "end": 236.98, "text": " But don't worry about that."}, {"start": 236.98, "end": 242.18, "text": " It may also happen that I am on a vacation that new episodes and voice samples pop up"}, {"start": 242.18, "end": 245.78, "text": " on the channel, please don't worry about that either."}, {"start": 245.78, "end": 248.06, "text": " Everything is working as intended."}, {"start": 248.06, "end": 253.06, "text": " They also experimented with music generation and the results are just stunning."}, {"start": 278.3, "end": 291.38, "text": " I don't know what to say."}, {"start": 291.38, "end": 296.86, "text": " These difficult problems, these impenetrable walls crumble one after another as DeepMind"}, {"start": 296.86, "end": 298.18, "text": " takes on them."}, {"start": 298.18, "end": 299.18, "text": " In sanity."}, {"start": 299.18, "end": 304.18, "text": " Their blog posts and the paper are both really well written, make sure to check them out."}, {"start": 304.18, "end": 306.82, "text": " They are both linked in the video description box."}, {"start": 306.82, "end": 312.7, "text": " I wager that artistic style transfer for sound and instruments is not only coming, but"}, {"start": 312.7, "end": 314.02, "text": " it will be here soon."}, {"start": 314.02, "end": 318.62, "text": " I imagine that we'll play a guitar and it will sound like a harp and we'll be able to"}, {"start": 318.62, "end": 322.62, "text": " sing 
something in Lady Gaga's voice and intonation."}, {"start": 322.62, "end": 327.9, "text": " I've also seen someone pitching the idea of creating audiobooks automatically with such"}, {"start": 327.9, "end": 328.9, "text": " a technique."}, {"start": 328.9, "end": 329.9, "text": " Wow."}, {"start": 329.9, "end": 335.38, "text": " I travel a lot and I'm almost always on the go, so I personally would love to have such"}, {"start": 335.38, "end": 336.38, "text": " audio books."}, {"start": 336.38, "end": 340.46, "text": " I have linked to the mentioned machine learning Reddit thread in the description box."}, {"start": 340.46, "end": 343.9, "text": " As always, there's lots of great discussion and ideas there."}, {"start": 343.9, "end": 350.21999999999997, "text": " It was also reported that the algorithm currently takes 90 minutes to synthesize one second"}, {"start": 350.21999999999997, "end": 351.74, "text": " of sound waveforms."}, {"start": 351.74, "end": 356.65999999999997, "text": " You're not a drill, one follow a paper down the line, it will take only a few minutes,"}, {"start": 356.65999999999997, "end": 360.1, "text": " a few more papers down the line, it will be real time."}, {"start": 360.1, "end": 362.7, "text": " Just think about all these advancements."}, {"start": 362.7, "end": 367.78, "text": " What a time we are living in and I am extremely excited to present them all to you fellow"}, {"start": 367.78, "end": 370.06, "text": " scholars in two minute papers."}, {"start": 370.06, "end": 374.34, "text": " Make sure to leave your thoughts and ideas in the comments section, we love reading them."}, {"start": 374.34, "end": 397.26, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XmM1tF7AxdA
Automatic Hair Modeling from One Image | Two Minute Papers #92
This time, we are going to talk about hair modeling - obtaining hair geometry information from a photograph. This geometry information we can use in our movies and computer games. We can also run simulations on them and see how they look on a digital character. This is a remarkably difficult problem and you'll see a great solution to it in this episode. ____________________________________ The paper "AutoHair: Fully Automatic Hair Modeling from A Single Image" is available here: http://gaps-zju.org/mlchai/resources/chai2016autohair.pdf http://gaps-zju.org/mlchai/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Betsy Jons - https://flic.kr/p/oye7FJ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A couple of episodes ago, we finally set sail in the wonderful world of hair simulations, and today we shall continue our journey in this domain. But this time, we're going to talk about hair modeling. So first, what is the difference between hair simulation and modeling? Well, simulation is about trying to compute the physical forces that act on hair strands, thereby showing the user how they would move about in reality. Modeling, however, is about obtaining geometry information from a photograph. This geometry information we can use in our movies and computer games; we can also run simulations on them and see how they look on a digital character. Just think about it. The input is one photograph, and the output is a digital 3D model. This sounds like a remarkably difficult problem. Typically, something that a human would do quite well at, but it would be too labor intensive to do so for a large number of hairstyles. Therefore, as usual, neural networks enter the fray by looking at the photograph and trying to estimate the densities and distributions of the hair strands. The predicted results are then matched against the hairstyles found in public data repositories, and the closest match is presented to the user. You can see some possible distribution classes here. Not only that, but the method is fully automatic, which means that unlike most previous works, it doesn't need any guidance from the user to accomplish this task. As a result, the authors created an enormous data set with 50,000 photographs and their reconstructions that they made freely available for everyone to use. The output results are so spectacular, it's almost as if we were seeing magic unfold before our eyes. The fact that we have so many hairstyles in this data set also opens up the possibility of editing, which is always quite a treat for artists working in the industry. The main limitation is a poorer reconstruction of regions that are not visible in the input photograph, but I think that goes without saying. It is slightly ameliorated by the fact that the public repositories contain hairstyles that make sense, so we can expect results of reasonable quality even for the regions we haven't seen in the input photograph. As always, please let me know below in the comments section whether you have found everything understandable in this episode. Was it easy to digest? Was there something that was not easy to follow? Your feedback, as always, is greatly appreciated. Thanks for watching and for your generous support, and I'll see you next time.
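As a toy illustration of the matching step described above, the following hypothetical sketch assumes a network (not shown) has already predicted some small descriptor of the strand distribution from the photo, and simply returns the database hairstyle whose descriptor is closest; the descriptor values and hairstyle names are made up for illustration.

```python
import numpy as np

def closest_hairstyle(predicted_descriptor, database):
    # database: {hairstyle name: descriptor vector}
    names = list(database)
    descriptors = np.stack([database[n] for n in names])
    # Euclidean distance from the predicted descriptor to every database entry.
    distances = np.linalg.norm(descriptors - predicted_descriptor, axis=1)
    return names[int(np.argmin(distances))]

database = {
    "short_straight": np.array([0.9, 0.1, 0.2]),
    "long_wavy": np.array([0.2, 0.8, 0.5]),
    "curly": np.array([0.1, 0.3, 0.9]),
}
print(closest_hairstyle(np.array([0.15, 0.75, 0.55]), database))  # long_wavy
```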
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojona Ifahir."}, {"start": 4.64, "end": 10.3, "text": " A couple episodes ago, we finally set sail in the wonderful world of hair simulations,"}, {"start": 10.3, "end": 14.24, "text": " and today we shall continue our journey in this domain."}, {"start": 14.24, "end": 17.68, "text": " But this time, we're going to talk about hair modeling."}, {"start": 17.68, "end": 22.5, "text": " So first, what is the difference between hair simulation and modeling?"}, {"start": 22.5, "end": 28.900000000000002, "text": " Well, simulation is about trying to compute the physical forces that act on hair strands"}, {"start": 28.9, "end": 33.0, "text": " thereby showing to the user how they would move about in reality."}, {"start": 33.0, "end": 38.3, "text": " Modeling, however, is about obtaining geometry information from a photograph."}, {"start": 38.3, "end": 42.6, "text": " This geometry information we can use in our movies and computer games,"}, {"start": 42.6, "end": 47.599999999999994, "text": " we can also run simulations on them and see how they look on a digital character."}, {"start": 47.599999999999994, "end": 54.2, "text": " Just think about it. The input is one photograph, and the output is a digital 3D model."}, {"start": 54.2, "end": 57.099999999999994, "text": " This sounds like a remarkably difficult problem."}, {"start": 57.1, "end": 60.5, "text": " Typically, something that a human would do quite well at,"}, {"start": 60.5, "end": 65.4, "text": " but it would be too labor intensive to do so for a large number of hairstyles."}, {"start": 65.4, "end": 70.5, "text": " Therefore, as usual, neural networks enter the fray by looking at the photograph"}, {"start": 70.5, "end": 75.1, "text": " and trying to estimate the densities and distributions of the hair strands."}, {"start": 75.1, "end": 81.1, "text": " The predicted results are then matched with the hairstyles found in public data repositories"}, {"start": 81.1, "end": 83.9, "text": " and the closest match is presented to the user."}, {"start": 83.9, "end": 86.9, "text": " You can see some possible distribution classes here."}, {"start": 86.9, "end": 90.30000000000001, "text": " Not only that, but the method is fully automatic,"}, {"start": 90.30000000000001, "end": 93.10000000000001, "text": " which means that unlike most previous works,"}, {"start": 93.10000000000001, "end": 96.9, "text": " it doesn't need any guidance from the user to accomplish this task."}, {"start": 96.9, "end": 103.30000000000001, "text": " As a result, the authors created an enormous data set with 50,000 photographs"}, {"start": 103.30000000000001, "end": 107.5, "text": " and their reconstructions that they made freely available for everyone to use."}, {"start": 107.5, "end": 110.7, "text": " The output results are so spectacular."}, {"start": 110.7, "end": 115.30000000000001, "text": " It's almost as if we were seeing magic unfold before our eyes."}, {"start": 115.3, "end": 118.7, "text": " The fact that we have so many hairstyles in this data set"}, {"start": 118.7, "end": 121.3, "text": " also opens up the possibility of editing,"}, {"start": 121.3, "end": 125.3, "text": " which is always quite a treat for artists working in the industry."}, {"start": 125.3, "end": 128.7, "text": " The main limitation is a poorer reconstruction of regions"}, {"start": 128.7, "end": 130.9, "text": " that are not visible in the input photograph,"}, {"start": 130.9, "end": 132.9, "text": " 
but I think that goes without saying."}, {"start": 132.9, "end": 138.5, "text": " It is slightly ameliorated by the fact that the public repositories contain hairstyles"}, {"start": 138.5, "end": 142.5, "text": " that make sense, so we can expect results of reasonable quality"}, {"start": 142.5, "end": 146.1, "text": " even for the regions we haven't seen in the input photograph."}, {"start": 146.1, "end": 149.1, "text": " As always, please let me know below in the comments section"}, {"start": 149.1, "end": 152.5, "text": " whether you have found everything understandable in this episode."}, {"start": 152.5, "end": 154.1, "text": " Was it easy to digest?"}, {"start": 154.1, "end": 156.7, "text": " Was there something that was not easy to follow?"}, {"start": 156.7, "end": 159.7, "text": " Your feedback, as always, is greatly appreciated."}, {"start": 159.7, "end": 162.3, "text": " Thanks for watching and for your generous support,"}, {"start": 162.3, "end": 181.3, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ksCSL6Ql0Yg
StyLit, Illumination-Guided Artistic Style Transfer | Two Minute Papers #91
Earlier, we have talked quite a bit about a fantastic new tool that we called artistic style transfer. This means that we have an input photograph that we'd like to modify, and another image from which we'd like to extract the artistic style. This way, we can, for instance, change our photo to look in the style of famous artists. Today, we're going to talk about a flamboyant little technique that is able to perform artistic style transfer in a way that preserves the illumination of the scene. ______________________________ The paper "StyLit: Illumination-Guided Example-Based Stylization of 3D Renderings" is available here: http://dcgi.fel.cvut.cz/home/sykorad/stylit Recommended for you: Artistic Style Transfer For Videos - https://www.youtube.com/watch?v=Uxax5EKg0zA Deep Neural Network Learns Van Gogh's Art - https://www.youtube.com/watch?v=-R9bJGNHltQ Neural Material Synthesis (this contains discussions on diffuse and specular materials)- https://www.youtube.com/watch?v=XpwW3glj2T8 WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Irina - https://flic.kr/p/nWXd5H Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Earlier, we have talked quite a bit about a fantastic new tool that we called artistic style transfer. This means that we have an input photograph that we'd like to modify and another image from which we'd like to extract the artistic style. This way, we can, for instance, change our photo to look in the style of famous artists. Now, artists in the visual effects industry spend a lot of time designing the lighting and the illumination of their scenes, which is a long and arduous process. This is typically done in some kind of light simulation program, and if anyone thinks this is an easy and straightforward thing to do, I would definitely recommend trying it. After this lighting and illumination step is done, we can apply some kind of artistic style transfer, but we shall quickly see that there is an insidious side effect to this process. It disregards, or even worse, destroys the illumination setup, leading to results that look physically incorrect. Today, we're going to talk about a flamboyant little technique that is able to perform artistic style transfer in a way that preserves the illumination of the scene. These kinds of works are super important because they enable us to take the wheel from the hands of the neural networks that perform these operations and force our will on them. This way, we can have greater control over what these neural networks do. Previous techniques take into consideration mostly color and normal information. Normals basically encode the shape of an object. However, these techniques don't really have a notion of illumination. They don't know that a reflection on an object should remain intact, and they have no idea about the existence of shadows either. For instance, we have recently talked about diffuse and specular material models, and setting up this kind of illumination is something that artists in the industry are quite familiar with. The goal is that we can retain these features throughout the process of style transfer. In this work, the artist is given a printed image of a simple object, like a sphere. This is no ordinary printed image, because this image comes from a photorealistic rendering program which is augmented by additional information, like what part of the image is a shadowed region and where the reflections are. And then, when the artist starts to add her own style to it, we know exactly what has been changed and how. This leads to a much more elaborate style transfer pipeline where the illumination stays intact. And the results are phenomenal. What is even more important, the usability of the solution is also beyond amazing. For instance, here, the artist can do the stylization on a simple sphere and get the artistic style to carry over to a complicated piece of geometry almost immediately. Temporal coherence is still to be improved, which means that if we try this on an animated sequence, it will be contaminated with flickering noise. We have talked about a work that does something similar for the old kind of style transfer; I've put a link in the video description box for that. I am sure that this kink will be worked out in no time. It's also interesting to note that the first style transfer paper was also published just a few months ago this year, and we are already reveling in excellent follow-up papers. I think this demonstrates the excitement of research quite aptly. The rate of progress in technology and algorithms is completely unmatched.
Fresh, new ideas pop up every day, and we can only wonder at their ingenuity. As usual, please let me know in the comment section whether you have found this episode interesting and understandable. If you felt that everything is fine here, that is also valuable feedback. Thank you. And by the way, if you wish to express your scholarly wisdom, our store is open with some amazing quality merch. Have a look. We also have a huge influx of people who became patrons recently. Welcome. Thank you so much for supporting Two Minute Papers. We love you too. Thanks for watching and for your generous support and I'll see you next time.
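For readers who want a feel for what illumination-guided transfer means in code, here is a very rough, hypothetical sketch, not the actual StyLit algorithm: each pixel of the new render carries guidance channels such as diffuse, specular and shadow intensity, and we copy the artist's stylized color from the exemplar pixel whose guidance channels are most similar. Real methods work on patches and are far more careful about coherence.

```python
import numpy as np

def transfer_style(target_guides, exemplar_guides, exemplar_style):
    # target_guides, exemplar_guides: (H, W, C) illumination guidance channels
    # exemplar_style: (H, W, 3) the artist's painted-over exemplar
    h, w, c = target_guides.shape
    flat_t = target_guides.reshape(-1, c)
    flat_e = exemplar_guides.reshape(-1, c)
    # For each target pixel, find the exemplar pixel closest in guide space.
    d = np.linalg.norm(flat_t[:, None, :] - flat_e[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return exemplar_style.reshape(-1, 3)[nearest].reshape(h, w, 3)

t = np.random.rand(8, 8, 3)   # guidance channels of the new scene
e = np.random.rand(8, 8, 3)   # guidance channels of the exemplar sphere
s = np.random.rand(8, 8, 3)   # the artist's stylized exemplar
print(transfer_style(t, e, s).shape)  # (8, 8, 3)
```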
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 12.0, "text": " Earlier, we have talked quite a bit about a fantastic new tool that we called artistic style transfer."}, {"start": 12.0, "end": 20.5, "text": " This means that we have an input photograph that we'd like to modify and another image from which we'd like to extract the artistic style."}, {"start": 20.5, "end": 26.5, "text": " This way, we can, for instance, change our photo to look in the style of famous artists."}, {"start": 26.5, "end": 37.5, "text": " Now, artists in the visual effects industry spend a lot of time designing the lighting and the illumination of their scenes, which is a long and arduous process."}, {"start": 37.5, "end": 47.5, "text": " This is typically done in some kind of light simulation program, and if anyone thinks this is an easy and straightforward thing to do, I would definitely recommend trying it."}, {"start": 47.5, "end": 58.5, "text": " After this lighting and the illumination step is done, we can apply some kind of artistic style transfer, but we shall quickly see that there is an insidious side effect to this process."}, {"start": 58.5, "end": 66.5, "text": " If this regards, or even worse, destroys a illumination setup leading to results that look physically incorrect."}, {"start": 66.5, "end": 77.0, "text": " Today, we're going to talk about a flamboyant little technique that is able to perform artistic style transfer in a way that preserves the illumination of the scene."}, {"start": 77.0, "end": 88.0, "text": " These kinds of works are super important because they enable us to take the wheel from the hands of the neural networks that perform these operations and force our will on them."}, {"start": 88.0, "end": 92.5, "text": " This way, we can have a greater control over what these neural networks do."}, {"start": 92.5, "end": 97.5, "text": " Previous techniques take into consideration mostly color and normal information."}, {"start": 97.5, "end": 101.0, "text": " Normal basically encode the shape of an object."}, {"start": 101.0, "end": 105.0, "text": " However, these techniques don't really have a notion of illumination."}, {"start": 105.0, "end": 113.0, "text": " They don't know that a reflection on an object should remain intact and they have no idea about the existence of shadows either."}, {"start": 113.0, "end": 125.0, "text": " For instance, we have recently talked about diffuse and specular material models, and setting up this kind of illumination is something that artists in the industry are quite familiar with."}, {"start": 125.0, "end": 130.0, "text": " The goal is that we can retain these features throughout the process of style transfer."}, {"start": 130.0, "end": 136.0, "text": " In this work, the artist has given a printed image of a simple object like a sphere."}, {"start": 136.0, "end": 150.0, "text": " This is no ordinary printed image because this image comes from a photo realistic rendering program which is augmented by additional information like what part of the image is a shadowed region and where the reflections are."}, {"start": 150.0, "end": 158.0, "text": " And then, when the artist starts to add her own style to it, we know exactly what has been changed and how."}, {"start": 158.0, "end": 164.0, "text": " This leads to a much more elaborate style transfer pipeline where the illumination stays intact."}, {"start": 164.0, "end": 167.0, "text": " And the results are phenomenal."}, 
{"start": 167.0, "end": 173.0, "text": " What is even more important, the usability of the solution is also beyond amazing."}, {"start": 173.0, "end": 184.0, "text": " For instance, here, the artist can do the stylization on a simple sphere and get the artistic style to carry over to a complicated piece of geometry almost immediately."}, {"start": 184.0, "end": 193.0, "text": " Temporal coherence is still to be improved, which means that if we try this on an animated sequence, it will be contaminated with flickering noise."}, {"start": 193.0, "end": 201.0, "text": " We have talked about a work that does something similar for the old kind of style transfer. I've put a link in the video description box for that."}, {"start": 201.0, "end": 205.0, "text": " I am sure that this kink will be worked out in no time."}, {"start": 205.0, "end": 217.0, "text": " It's also interesting to note that the first style transfer paper was also published just a few months ago this year and we are already leveraging in excellent follow-up papers."}, {"start": 217.0, "end": 226.0, "text": " I think this demonstrates the excitement of research quite aptly. The rate of progress in technology and algorithms are completely unmatched."}, {"start": 226.0, "end": 232.0, "text": " Fresh, new ideas pop up every day and we can only frown and wander at their ingenuity."}, {"start": 232.0, "end": 238.0, "text": " As usual, please let me know in the comment section whether you have found this episode interesting and understandable."}, {"start": 238.0, "end": 243.0, "text": " If you felt that everything is fine here, that is also valuable feedback. Thank you."}, {"start": 243.0, "end": 251.0, "text": " And by the way, if you wish to express your scholarly wisdom, our story is open with some amazing quality merch."}, {"start": 251.0, "end": 261.0, "text": " Have a look. We also have a huge influx of people who became patrons recently. Welcome. Thank you so much for supporting two minute papers. We love you too."}, {"start": 261.0, "end": 265.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HvHZXPd0Bjs
Interactive Hair-Solid Simulations | Two Minute Papers #90
We have talked about fluid and cloth simulations earlier, but we never really set foot in the domain of hair simulations in the series. To obtain some footage of virtual hair movement, simulating the dynamics of hundreds of thousands of hair strands is clearly too time consuming and would be a flippant attempt to do so. In this episode, we discuss a technique to faithfully simulate 150 thousand hair strands by using only 400 guide hairs. ____________________________ The paper "Adaptive Skinning for Interactive Hair-Solid Simulation" is available here: http://gaps-zju.org/mlchai/resources/chai2016adaptive.pdf http://gaps-zju.org/mlchai/ Two Minute Papers offers great perks to supporters on Patreon: https://www.patreon.com/TwoMinutePapers WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Faylyne (the image has been flipped and edited) - https://flic.kr/p/dayCjn Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We have talked about fluid and cloth simulations earlier, but we never really set foot in the domain of hair simulations in the series. To obtain some footage of virtual hair movement, simulating the dynamics of hundreds of thousands of hair strands is clearly too time consuming, and it would be a flippant attempt to do so. If we do not wish to watch a simulation unfold with increasing dismay, as it would take hours of computation time to obtain just one second of footage, we have to come up with a cunning plan. A popular method to obtain detailed real-time hair simulations is not to compute the trajectory of every single hair strand, but to have a small set of strands that we call guide hairs. For these guide hairs, we compute everything. However, since this is a sparse set of elements, we have to fill the gaps with a large number of hair strands, and we essentially try to guess how these should move based on the behavior of guide hairs near them. Essentially, one guide hair is responsible for guiding an entire batch, or an entire braid of hair, if you will. This technique we like to call a reduced hair simulation, and the guessing part is often referred to as interpolation. And the question immediately arises: how many guide hairs do we use, and how many total hair strands can we simulate with them without our customers finding out that we are essentially cheating? The selling point of this piece of work is that it uses only 400 guide hairs and, leaning on them, it can simulate up to a total number of 150,000 strands in real time. This leads to amazingly detailed hair simulations. My goodness, look at how beautiful these results are. Not only that, but as it is demonstrated quite aptly here, it can also faithfully handle rich interactions and collisions with other solid objects. For instance, we can simulate all kinds of combing, or pulling our hair out, which is what most researchers do in their moments of great peril just before finding the solution to a difficult problem. Not only a hair, but a researcher simulator, if you will. The results are compared to a full space simulation, which means simulating every single hair strand, and that is exactly as time consuming as it sounds. The results are very close to being indistinguishable, which was not the case for previous works that created false intersections where hair strands would erroneously go through solid objects. We can also stroke bunnies with our hair models in this truly amazing piece of work. These episodes are also available in early access for our Patreon supporters. We also have plenty of other neat perks; I've put a link in the description box, make sure to have a look. Thanks for watching and for your generous support, and I'll see you next time.
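Here is a minimal sketch of the guide-hair idea, not the paper's adaptive skinning scheme: every one of the many displayed strands is reconstructed as a weighted blend of a few simulated guide strands, with made-up inverse-distance weights computed from the strand roots.

```python
import numpy as np

def interpolate_strand(root, guide_roots, guide_strands):
    # guide_strands: (num_guides, num_vertices, 3) simulated guide geometry
    dists = np.linalg.norm(guide_roots - root, axis=1) + 1e-6
    weights = 1.0 / dists          # closer guides influence the strand more
    weights /= weights.sum()
    # Blend whole strands vertex by vertex with the same weights.
    return np.tensordot(weights, guide_strands, axes=1)

guide_roots = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
guide_strands = np.random.rand(2, 10, 3)          # 2 guides, 10 vertices each
new_strand = interpolate_strand(np.array([0.25, 0.0, 0.0]),
                                guide_roots, guide_strands)
print(new_strand.shape)  # (10, 3): one interpolated strand
```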
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Karojejolna Ifeher."}, {"start": 4.88, "end": 9.96, "text": " We have talked about fluid and cloth simulations earlier, but we never really set foot in the"}, {"start": 9.96, "end": 12.84, "text": " domain of hair simulations in the series."}, {"start": 12.84, "end": 18.68, "text": " To obtain some footage of virtual hair movement, simulating the dynamics of hundreds of thousands"}, {"start": 18.68, "end": 24.080000000000002, "text": " of hair strands is clearly two-time consuming and would be a flippant attempt to do so."}, {"start": 24.080000000000002, "end": 29.240000000000002, "text": " If we do not wish to watch a simulation unfold with increasing this may as it would take"}, {"start": 29.24, "end": 34.6, "text": " hours of computation time to obtain just one second of footage, we have to come up with"}, {"start": 34.6, "end": 35.76, "text": " a cunning plan."}, {"start": 35.76, "end": 42.0, "text": " A popular method to obtain detailed real-time hair simulations is not to compute the trajectory"}, {"start": 42.0, "end": 48.8, "text": " of every single hair strand, but to have a small set of strands that we call guide hairs."}, {"start": 48.8, "end": 51.72, "text": " For these guide hairs, we compute everything."}, {"start": 51.72, "end": 57.08, "text": " However, since this is a sparse set of elements, we have to fill the gaps with a large number"}, {"start": 57.08, "end": 63.199999999999996, "text": " of hair strands and we essentially try to guess how this should move based on the behavior"}, {"start": 63.199999999999996, "end": 65.08, "text": " of guide hairs near them."}, {"start": 65.08, "end": 70.48, "text": " Essentially, one guide hair is responsible in guiding an entire batch or an entire braid"}, {"start": 70.48, "end": 72.16, "text": " of hair, if you will."}, {"start": 72.16, "end": 77.36, "text": " This technique will like to call a reduced hair simulation and the guessing part is often"}, {"start": 77.36, "end": 79.88, "text": " referred to as interpolation."}, {"start": 79.88, "end": 85.2, "text": " And the question immediately arises, how many guide hairs do we use and how many total"}, {"start": 85.2, "end": 90.92, "text": " hair strands can we simulate with them without our customers finding out that we are essentially"}, {"start": 90.92, "end": 91.92, "text": " cheating?"}, {"start": 91.92, "end": 97.4, "text": " The selling point of this piece of work is that it uses only 400 guide hairs and leaning"}, {"start": 97.4, "end": 105.68, "text": " on them, it can simulate up to a total number of 150,000 strands in real time."}, {"start": 105.68, "end": 109.24000000000001, "text": " This leads to amazingly detailed hair simulations."}, {"start": 109.24000000000001, "end": 113.48, "text": " My goodness, look at how beautiful these results are."}, {"start": 113.48, "end": 118.92, "text": " Not only that, but as it is demonstrated quite aptly here, it can also faithfully handle"}, {"start": 118.92, "end": 122.88000000000001, "text": " rich interactions and collisions with other solid objects."}, {"start": 122.88000000000001, "end": 127.88000000000001, "text": " For instance, we can simulate all kinds of combing or pulling our hair out, which is what"}, {"start": 127.88000000000001, "end": 133.48000000000002, "text": " most researchers do in their moments of great peril just before finding the solution to"}, {"start": 133.48000000000002, "end": 134.96, "text": " a difficult 
problem."}, {"start": 134.96, "end": 140.16, "text": " Not only hair, but a researcher simulator, if you will."}, {"start": 140.16, "end": 146.07999999999998, "text": " The results are compared to a full space simulation, which means simulating every single hair"}, {"start": 146.07999999999998, "end": 150.35999999999999, "text": " strand and that is exactly as time consuming as it sounds."}, {"start": 150.35999999999999, "end": 155.84, "text": " The results are very close to being indistinguishable, which was not the case for previous works"}, {"start": 155.84, "end": 162.24, "text": " that created false intersections where hair strands would erroneously go through solid objects."}, {"start": 162.24, "end": 167.64, "text": " We can also stroke bunnies with our hair models in this truly amazing piece of work."}, {"start": 167.64, "end": 172.11999999999998, "text": " These episodes are also available in early access for our Patreon supporters."}, {"start": 172.11999999999998, "end": 176.48, "text": " We also have plenty of other neat perks, I've put a link in the description box, make"}, {"start": 176.48, "end": 177.64, "text": " sure to have a look."}, {"start": 177.64, "end": 200.64, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cVZzkSaxKmY
3D Printing With Filigree Patterns | Two Minute Papers #89
Filigrees are detailed, thin patterns typically found in jewelry, fabrics and ornaments, and as you may imagine, crafting such motifs on objects is incredibly laborious. This project is about leaving out the craftsmen from the equation by choosing a set of target filigree patterns and creating a complex shape out of them that can be easily 3D printed. The challenge lies in grouping and packing up these patterns to fill a surface evenly. Let's see what this piece of work has to say about the problem! ____________________ The paper "Synthesis of Filigrees for Digital Fabrication" is available here: http://i.cs.hku.hk/~wkchen/projects/proj_sig16.html Recommended for you: Our earlier video on optimization - https://www.youtube.com/watch?v=1ypV5ZiIbdA WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was taken from the corresponding paper linked above. Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Filigrees are detailed, thin patterns typically found in jewelry, fabrics, and ornaments, and as you may imagine, crafting such motifs on objects is incredibly laborious. This project is about leaving out the craftsmen from the equation by choosing a set of target filigree patterns and creating a complex shape out of them that can be easily 3D printed. The challenge lies in grouping and packing up these patterns to fill a surface evenly. We start out with a base model with a poor structure, which is not completely random, but as you can see, it's quite a forlorn effort. In several subsequent steps, we try to adjust the positions and shapes of the filigree elements to achieve more pleasing results. A more pleasing result we define as one that minimizes the amount of overlapping and maximizes the connectivity of the final shape. Sounds like an optimization problem from earlier. And that is exactly what it is. Really cool, right? The optimization procedure itself is far from trivial, and the paper discusses possible challenges and their solutions in detail. For instance, the fact that we can also add control fields to describe our vision regarding the size and orientation of the filigree patterns is an additional burden that the optimizer has to deal with. We can also specify the ratio of the different input filigree elements that we'd like to see added to the model. The results are compared to previous work, and the difference speaks for itself. However, it's important to point out that even this thing that we call previous work was still published this year. Talk about rapid progress in research. Absolutely phenomenal work. The evaluation and the execution of the solution as described in the paper is also second to none. Make sure to have a look. And thank you so much for taking the time to comment on our earlier video about the complexity of the series. I'd like to assure you that we read every single comment and found a ton of super helpful feedback there. It seems to me that the vast majority of you agree that a simple overlay text does the job, and while it is there, it's even better to make it clickable so it leads to a video that explains the concept in a bit more detail for the more curious minds out there. I'll try to make sure that everything is available on mobile as well. You Fellow Scholars are the best, and thank you so much to everyone for leaving a comment. Also, please let me know in the comments section if you have found this episode to be understandable or if there were any terms that you've never heard of. If everything was in order, that's also valuable information, so make sure to leave a comment. Thanks for watching and for your generous support, and I'll see you next time.
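As a toy example of the kind of objective hinted at above, and emphatically not the paper's actual energy, the sketch below approximates filigree elements as circles, penalizes any two circles that interpenetrate, and rewards pairs that roughly touch; an off-the-shelf optimizer could then move the centers around to lower this cost.

```python
import math

def packing_cost(circles, w_overlap=10.0, w_connect=1.0, touch_tol=0.05):
    # circles: list of (x, y, radius) stand-ins for filigree elements
    cost = 0.0
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            d = math.hypot(x1 - x2, y1 - y2)
            overlap = max(0.0, (r1 + r2) - d)       # how deeply they interpenetrate
            cost += w_overlap * overlap             # overlapping is penalized
            if abs(d - (r1 + r2)) < touch_tol:      # roughly tangent: connected
                cost -= w_connect                   # connectivity is rewarded
    return cost

# One connected pair, no overlaps: the cost reflects a single connectivity reward.
print(packing_cost([(0, 0, 1), (2.0, 0, 1), (5, 0, 1)]))  # -1.0
```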
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karoje Zsolnai-Fehir."}, {"start": 5.0, "end": 11.5, "text": " Filigrees are detailed, thin patterns typically found in jewelry, fabrics, and ornaments,"}, {"start": 11.5, "end": 17.0, "text": " and as you may imagine, crafting such motives on objects is incredibly laborious."}, {"start": 17.0, "end": 22.5, "text": " This project is about leaving out the craftsmen from the equation by choosing a set of target"}, {"start": 22.5, "end": 28.8, "text": " filigree patterns and creating a complex shape out of them that can be easily 3D printed."}, {"start": 28.8, "end": 33.8, "text": " The challenge lies in grouping and packing up these patterns to fill a surface evenly."}, {"start": 33.8, "end": 39.2, "text": " We start out with a base model with a poor structure which is not completely random,"}, {"start": 39.2, "end": 42.3, "text": " but as you can see, it's quite a fore-learn effort."}, {"start": 42.3, "end": 48.0, "text": " In several subsequent steps, we try to adjust the positions and shapes of the filigree elements"}, {"start": 48.0, "end": 50.400000000000006, "text": " to achieve more pleasing results."}, {"start": 50.400000000000006, "end": 55.900000000000006, "text": " The more pleasing results we define as one that minimizes the amount of overlapping and"}, {"start": 55.9, "end": 58.9, "text": " maximizes the connectivity of the final shape."}, {"start": 58.9, "end": 61.9, "text": " Sounds like an optimization problem from earlier."}, {"start": 61.9, "end": 64.9, "text": " And, that is exactly what it is."}, {"start": 64.9, "end": 65.9, "text": " Really cool, right?"}, {"start": 65.9, "end": 71.9, "text": " The optimization procedure itself is far from trivial and the paper discusses possible"}, {"start": 71.9, "end": 74.9, "text": " challenges and their solutions in detail."}, {"start": 74.9, "end": 80.9, "text": " For instance, the fact that we can also add control fields to describe our vision regarding"}, {"start": 80.9, "end": 86.30000000000001, "text": " the size and orientation of the filigree patterns is an additional burden that the optimizer"}, {"start": 86.30000000000001, "end": 87.60000000000001, "text": " has to deal with."}, {"start": 87.60000000000001, "end": 92.7, "text": " We can also specify the ratio of the different input filigree elements that we'd like to see"}, {"start": 92.7, "end": 94.10000000000001, "text": " added to the model."}, {"start": 94.10000000000001, "end": 98.9, "text": " The results are compared to previous work and the difference speaks for itself."}, {"start": 98.9, "end": 104.38000000000001, "text": " However, it's important to point out that even this thing that we call previous work"}, {"start": 104.38000000000001, "end": 106.66000000000001, "text": " was still published this year."}, {"start": 106.66000000000001, "end": 109.5, "text": " Talk about rapid progress in research."}, {"start": 109.5, "end": 111.5, "text": " Absolutely phenomenal work."}, {"start": 111.5, "end": 117.0, "text": " The evaluation and the execution of the solution as described in the paper is also second"}, {"start": 117.0, "end": 118.0, "text": " to none."}, {"start": 118.0, "end": 119.5, "text": " Make sure to have a look."}, {"start": 119.5, "end": 124.38, "text": " And thank you so much for taking the time to comment on our earlier video about the"}, {"start": 124.38, "end": 126.26, "text": " complexity of the series."}, {"start": 126.26, "end": 132.02, "text": " I'd like to 
assure you that we read every single comment and found a ton of super helpful"}, {"start": 132.02, "end": 133.02, "text": " feedback there."}, {"start": 133.02, "end": 138.1, "text": " It seems to me that the vast majority of you agree that a simple overlay text does the"}, {"start": 138.1, "end": 143.57999999999998, "text": " job and while it is there, it's even better to make it clickable so it leads to a video"}, {"start": 143.57999999999998, "end": 148.57999999999998, "text": " that explains the concept in a bit more detail for the more curious minds out there."}, {"start": 148.57999999999998, "end": 152.18, "text": " I'll try to make sure that everything is available in mobile as well."}, {"start": 152.18, "end": 157.18, "text": " You fellow scholars are the best and thank you so much for everyone for leaving a comment."}, {"start": 157.18, "end": 162.45999999999998, "text": " Also, please let me know in the comments section if you have found this episode to be understandable"}, {"start": 162.45999999999998, "end": 165.5, "text": " or if there were any terms that you've never heard of."}, {"start": 165.5, "end": 170.78, "text": " If everything was in order, that's also valuable information so make sure to leave a comment."}, {"start": 170.78, "end": 195.06, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
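To make the packing objective described in the filigree episode above a bit more tangible, here is a minimal Python sketch of an energy that penalizes overlap and rewards connectivity. It is only an illustration under strong assumptions: the filigree elements are stood in for by flat discs, and the helper functions and weights are invented for this example, not taken from the paper.

```python
import math
from itertools import combinations

# Toy stand-in for a filigree element: a disc with a center and a radius.
# The real method works with curved 2D patterns deformed over a 3D surface;
# discs are used here only to make the two energy terms concrete.
class Element:
    def __init__(self, x, y, r):
        self.x, self.y, self.r = x, y, r

def overlap(a, b):
    """Crude overlap measure between two discs (0 if they do not intersect)."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    return max(0.0, (a.r + b.r) - d)

def touching(a, b, gap=0.05):
    """Two elements count as connected if they (nearly) touch."""
    return math.hypot(a.x - b.x, a.y - b.y) <= a.r + b.r + gap

def num_connected_components(elements):
    """Union-find over the 'touching' graph; fewer components = better connectivity."""
    parent = list(range(len(elements)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(elements)), 2):
        if touching(elements[i], elements[j]):
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(elements))})

def packing_energy(elements, w_overlap=10.0, w_connect=1.0):
    """Lower is better: penalize overlapping elements, penalize disconnected pieces."""
    e_overlap = sum(overlap(a, b) for a, b in combinations(elements, 2))
    e_connect = num_connected_components(elements) - 1
    return w_overlap * e_overlap + w_connect * e_connect

if __name__ == "__main__":
    layout = [Element(0, 0, 1), Element(1.9, 0, 1), Element(4.5, 0, 1)]
    print(packing_energy(layout))
```

A simple local optimizer would repeatedly perturb element positions and shapes and keep the changes that lower this energy; the actual procedure in the paper additionally handles element deformation and the user-specified control fields, which this sketch ignores.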
Two Minute Papers
https://www.youtube.com/watch?v=XpwW3glj2T8
Neural Material Synthesis | Two Minute Papers #88
We are going to talk about techniques that create physically based material models from photographs that we can use in our light simulation programs. In an earlier work, two photographs are required for high-quality reconstruction. It seems that working from only one photograph doesn't seem possible at all. However, with the power of deep learning... ___________________________________ The paper "Two-Shot SVBRDF Capture for Stationary Materials" is available here: https://mediatech.aalto.fi/publications/graphics/TwoShotSVBRDF/ The paper "Reflectance Modeling by Neural Texture Synthesis" is available here: https://mediatech.aalto.fi/publications/graphics/NeuralSVBRDF/ NVIDIA has implemented the two-shot model! Have a look: https://twitter.com/karoly_zsolnai/status/839570124017438726 Our earlier episode on Gradient Domain Light Transport is available here: https://www.youtube.com/watch?v=sSnDTPjfBYU The light transport course at the Technical University of Vienna is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Image credits: Sherrie Thai - https://flic.kr/p/7pNLqB Giulio Bernardi - https://flic.kr/p/b2GMJ Andy Beatty - https://flic.kr/p/7j3dUX Open Grid Scheduler - https://flic.kr/p/Gdg3JC liz west - https://flic.kr/p/rAbED http://collagefactory.blogspot.hu/2010/04/brdf-for-diffuseglossyspecular.html Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you are new here, this is a series about research with the name Two Minute Papers, but let's be honest here, it's never two minutes. We are going to talk about two really cool papers that help us create physically based material models from photographs that we can use in our light simulation programs. Just as a note, these authors, Miika and Jaakko, have been on a rampage for years now and have put out so many fantastic papers, each of which I was blown away by. For instance, earlier, we talked about their work on gradient domain light transport. Brilliant piece of work. I've put a link in the description box, make sure to check it out. So the problem we are trying to solve is very simple to understand. The input is a photograph of a given material somewhere in our vicinity and the output is a bona fide physical material model that we can use in our photorealistic rendering program. We can import real world materials into our virtual worlds, if you will. Before we proceed, let's define a few mandatory terms. A material is diffuse if incoming light from one direction is reflected equally in all directions. This means that it looks the same from all directions. White walls and matte surfaces are excellent examples of that. A material we shall consider specular if incoming light from one direction is reflected back to one direction. This means that if we turn our head a bit, we will see something different. For instance, the windshield of a car, water and reflections in the mirror can be visualized with a specular material model. Of course, materials can also be a combination of both. For instance, car paint, our hair and skin are all combinations of these material models. Glossy materials are midway between the two, where the incoming light from one direction is reflected not everywhere equally and not in one direction, but into a small selected set of directions. They change a bit when we move our head, but not that much. In the two-shot capture paper, a material model is given by how much light is reflected and absorbed by the diffuse and specular components of the material, and something that we call a normal map, which captures the bumpiness of the material. Other factors like glossiness and anisotropy are also recorded, but we shall focus on the diffuse and specular parts. The authors ask us to grab our phone for two photographs of a material to ensure a high quality reconstruction procedure, one with flash and one without. And the question immediately arises, why two images? Well, the image without flash can capture the component that looks the same from all directions. This is the diffuse component, and a photograph with flash can capture the specular component, because we can see how the material handles specular reflections. And it is needless to say, the presented results are absolutely fantastic. So first paper, two images, one material model. And therein lies the problem which they tried to address in the second paper. If a computer looks at such an image, it doesn't know which part of one photograph is the diffuse color and which is the specular reflection. However, I remember sitting in the waiting room of a hospital while reading the first paper, and this waiting room had a tiled glossy wall, and I was thinking that one image should be enough, because if I look at something, I can easily discern what the diffuse colors are and which part is the specular reflection of something else. I don't need multiple photographs for that. 
I can also immediately see how bumpy it is even from one photograph. I don't need to turn my head around. This is because we humans have not a mathematical but an intuitive understanding of the materials we see around us. So can we explain the same kind of understanding of materials to a computer somehow? Can we do it with only one image? And the answer is yes we can, and hopefully we already feel the alluring call of neural networks. We can get a neural network that was trained on a lot of different images to try to guess what these material reflectance parameters should look like. However, the output should not be one image but multiple images with the diffuse and specular reflectance information and the normal map to describe the bumpiness of this surface. Merely throwing a neural network at this problem is, however, not sufficient. There needs to be some kind of conspiracy between these images, because real materials are not arbitrarily put together. If one of these images is smooth or has interesting features somewhere, the others have to follow it in some way. This "some way" is mathematically quite challenging to formulate, which is a really cool part of the paper. This conspiracy part is a bit like if we had four criminals testifying at a trial where they try to sell their lies, and to maintain the credibility of their made up story, they have previously had to synchronize their lies so they line up correctly. The paper contains neat tricks to control the output of the neural network and create these conspiracies across multiple image outputs that yield a valid and believable material model. And the results are again just fantastic. Second paper, one image, one material model. It doesn't get any better than that. Spectacular, not specular, spectacular piece of work. The first paper is great, but the second is smoking hot. By all that is holy, I'm getting goosebumps. If you are interested in hearing a bit more about light transport and are not afraid of some mathematics, we recently recorded my full course on this at the Technical University of Vienna, the entirety of which is freely available for everyone. There's a link for it in the description box, make sure to check it out. Thanks for watching and for your generous support, and I'll see you next time.
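As a side note on how such recovered maps are actually consumed by a renderer, the following sketch evaluates a toy diffuse plus specular material at a single surface point. The Blinn-Phong style specular lobe is only a stand-in, not the reflectance model used in these papers, and all parameter values are made up for illustration.

```python
import numpy as np

def shade(diffuse_albedo, specular_albedo, glossiness, normal, light_dir, view_dir):
    """Evaluate a toy diffuse + specular material at one surface point.

    Blinn-Phong stands in for the papers' actual (anisotropic) reflectance
    model; the point is only to show how the recovered maps -- diffuse albedo,
    specular albedo, glossiness and normal -- plug into a renderer.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)  # half vector between light and view

    # Diffuse part: incoming light scattered equally in all directions.
    diffuse = diffuse_albedo * max(np.dot(n, l), 0.0)

    # Specular part: a glossy lobe around the mirror direction; a higher
    # glossiness exponent means a tighter highlight, closer to a mirror.
    specular = specular_albedo * max(np.dot(n, h), 0.0) ** glossiness

    return diffuse + specular

# One texel's worth of recovered parameters (made-up values for illustration).
color = shade(diffuse_albedo=np.array([0.6, 0.3, 0.2]),
              specular_albedo=np.array([0.04, 0.04, 0.04]),
              glossiness=64.0,
              normal=np.array([0.0, 0.0, 1.0]),
              light_dir=np.array([0.3, 0.2, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]))
print(color)
```

In a real pipeline this evaluation would run per texel, with the diffuse albedo, specular albedo, glossiness and normal looked up from the maps that the capture method produces.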
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojona Ifehir."}, {"start": 4.64, "end": 9.72, "text": " If you are new here, this is a series about research with the name Two Minute Papers, but"}, {"start": 9.72, "end": 13.16, "text": " let's be honest here, it's never two minutes."}, {"start": 13.16, "end": 18.44, "text": " We are going to talk about two really cool papers that help us create physically based"}, {"start": 18.44, "end": 24.0, "text": " material models from photographs that we can use in our live simulation programs."}, {"start": 24.0, "end": 29.52, "text": " Just as a note, these authors, Mika and Diaco have been on a rampage for years now and"}, {"start": 29.52, "end": 34.96, "text": " have popped so many fantastic papers, each of which I was blown away by."}, {"start": 34.96, "end": 39.96, "text": " For instance, earlier, we talked about their work on gradient domain light transport."}, {"start": 39.96, "end": 41.12, "text": " Brilliant piece of work."}, {"start": 41.12, "end": 44.24, "text": " I've put a link in the description box, make sure to check it out."}, {"start": 44.24, "end": 47.760000000000005, "text": " So the problem we are trying to solve is very simple to understand."}, {"start": 47.760000000000005, "end": 53.4, "text": " The input is a photograph of a given material somewhere in our vicinity and the output is"}, {"start": 53.4, "end": 59.4, "text": " a bona fide physical material model that we can use in our photorealistic rendering program."}, {"start": 59.4, "end": 64.2, "text": " We can import real world materials in our virtual worlds, if you will."}, {"start": 64.2, "end": 67.48, "text": " Before we proceed, let's define a few mandatory terms."}, {"start": 67.48, "end": 73.75999999999999, "text": " A material is diffuse if incoming light from one direction is reflected equally in all"}, {"start": 73.75999999999999, "end": 75.24, "text": " directions."}, {"start": 75.24, "end": 78.32, "text": " This means that they look the same from all directions."}, {"start": 78.32, "end": 82.08, "text": " White walls and matte surfaces are excellent examples of that."}, {"start": 82.08, "end": 87.44, "text": " A material we shall consider specular if incoming light from one direction is reflected"}, {"start": 87.44, "end": 89.52, "text": " back to one direction."}, {"start": 89.52, "end": 93.88, "text": " This means that if we turn our head a bit, we will see something different."}, {"start": 93.88, "end": 99.44, "text": " For instance, the windshield of a car, water and reflections in the mirror can be visualized"}, {"start": 99.44, "end": 101.28, "text": " with a specular material model."}, {"start": 101.28, "end": 104.75999999999999, "text": " Of course, materials can also be a combination of both."}, {"start": 104.75999999999999, "end": 111.08, "text": " For instance, car paint, our hair and skin are all combinations of these material models."}, {"start": 111.08, "end": 115.88, "text": " Glossy materials are midway between the two where the incoming light from one direction"}, {"start": 115.88, "end": 121.52, "text": " is reflected to not everywhere equally and not in one direction, but a small selected"}, {"start": 121.52, "end": 123.11999999999999, "text": " set of directions."}, {"start": 123.11999999999999, "end": 126.6, "text": " They change a bit when we move our head, but not that much."}, {"start": 126.6, "end": 132.56, "text": " In the two-shot capture paper, a material model is given by how much light 
is reflected"}, {"start": 132.56, "end": 138.24, "text": " and absorbed by the diffuse and specular components of the material and something that we call"}, {"start": 138.24, "end": 142.32, "text": " a normal map which captures the bumpiness of the material."}, {"start": 142.32, "end": 147.48, "text": " Other factors like glossiness and anisotropy are also recorded, but we shall focus on the"}, {"start": 147.48, "end": 149.44, "text": " diffuse and specular parts."}, {"start": 149.44, "end": 155.28, "text": " The authors ask us to grab our phone for two photographs of a material to ensure a high"}, {"start": 155.28, "end": 160.12, "text": " quality reconstruction procedure, one with flash and one without."}, {"start": 160.12, "end": 164.12, "text": " And the question immediately arises, why two images?"}, {"start": 164.12, "end": 170.35999999999999, "text": " Well, the image without flash can capture the component that looks the same from all directions."}, {"start": 170.36, "end": 175.96, "text": " This is the diffuse component and a photograph with flash can capture the specular component"}, {"start": 175.96, "end": 180.24, "text": " because we can see how the material handles specular reflections."}, {"start": 180.24, "end": 185.4, "text": " And it is needless to say, the presented results are absolutely fantastic."}, {"start": 185.4, "end": 189.60000000000002, "text": " So first paper, two images, one material model."}, {"start": 189.60000000000002, "end": 194.20000000000002, "text": " And therein lies the problem which they tried to address in the second paper."}, {"start": 194.20000000000002, "end": 199.68, "text": " If a computer looks at such an image, it doesn't know which part of one photograph"}, {"start": 199.68, "end": 202.76000000000002, "text": " is a diffuse and which is the specular reflection."}, {"start": 202.76000000000002, "end": 208.04000000000002, "text": " However, I remember sitting in the waiting room of a hospital while reading the first paper"}, {"start": 208.04000000000002, "end": 213.6, "text": " and this waiting room had a tiled glossy wall and I was thinking that one image should"}, {"start": 213.6, "end": 218.92000000000002, "text": " be enough because if I look at something, I can easily discern what the diffuse colors"}, {"start": 218.92000000000002, "end": 222.68, "text": " are and which part is the specular reflection of something else."}, {"start": 222.68, "end": 225.24, "text": " I don't need multiple photographs for that."}, {"start": 225.24, "end": 229.96, "text": " I can also immediately see how bumpy it is even from one photograph."}, {"start": 229.96, "end": 232.36, "text": " I don't need to turn my head around."}, {"start": 232.36, "end": 238.60000000000002, "text": " This is because we humans have not a mathematical but an intuitive understanding of the materials"}, {"start": 238.60000000000002, "end": 240.28, "text": " we see around us."}, {"start": 240.28, "end": 246.08, "text": " So can we explain the same kind of understanding of materials to a computer somehow?"}, {"start": 246.08, "end": 249.20000000000002, "text": " Can we do it with only one image?"}, {"start": 249.20000000000002, "end": 255.20000000000002, "text": " And the answer is yes we can and hopefully we already feel the alluring call of murals"}, {"start": 255.2, "end": 256.28, "text": " networks."}, {"start": 256.28, "end": 261.36, "text": " We can get a neural network that was trained on a lot of different images to try to guess"}, {"start": 261.36, "end": 264.68, "text": " 
what these material reflectance parameters should look like."}, {"start": 264.68, "end": 271.0, "text": " However, the output should not be one image but multiple images with the diffuse and specular"}, {"start": 271.0, "end": 276.8, "text": " reflectance information and the normal map to describe the bumpiness of this surface."}, {"start": 276.8, "end": 281.59999999999997, "text": " Merely throwing a neural network at this problem is however not sufficient."}, {"start": 281.6, "end": 286.64000000000004, "text": " There needs to be some kind of conspiracy between these images because real materials are"}, {"start": 286.64000000000004, "end": 288.84000000000003, "text": " not arbitrarily put together."}, {"start": 288.84000000000003, "end": 294.24, "text": " If one of these images is smooth or has interesting features somewhere, the others have to follow"}, {"start": 294.24, "end": 295.92, "text": " it in some way."}, {"start": 295.92, "end": 301.12, "text": " This some way is mathematically quite challenging to formulate which is a really cool part"}, {"start": 301.12, "end": 302.36, "text": " of the paper."}, {"start": 302.36, "end": 307.96000000000004, "text": " This conspiracy part is a bit like if we had four criminals testifying at a trial where"}, {"start": 307.96, "end": 312.96, "text": " they try to sell their lies and to maintain the credibility of their made up story, they"}, {"start": 312.96, "end": 317.79999999999995, "text": " have previously had to synchronize their lies so they line up correctly."}, {"start": 317.79999999999995, "end": 322.79999999999995, "text": " The paper contains neat tricks to control the output of the neural network and create"}, {"start": 322.79999999999995, "end": 328.56, "text": " these conspiracies across multiple image outputs that yield a valid and believable material"}, {"start": 328.56, "end": 329.56, "text": " model."}, {"start": 329.56, "end": 333.56, "text": " And the results are again just fantastic."}, {"start": 333.56, "end": 336.91999999999996, "text": " Second paper, one image, one material model."}, {"start": 336.92, "end": 339.72, "text": " It doesn't get any better than that."}, {"start": 339.72, "end": 344.24, "text": " Spectacular, not specular, spectacular piece of work."}, {"start": 344.24, "end": 348.24, "text": " The first paper is great, but the second is smoking hot."}, {"start": 348.24, "end": 354.52000000000004, "text": " By all that is holy, I'm getting goosebumps."}, {"start": 354.52000000000004, "end": 358.48, "text": " If you are interested in hearing a bit more about light transport and are not afraid of"}, {"start": 358.48, "end": 363.48, "text": " some mathematics, we recently recorded my full course on this at the Technical University"}, {"start": 363.48, "end": 368.12, "text": " of Vienna, the entirety of which is freely available for everyone."}, {"start": 368.12, "end": 372.64000000000004, "text": " There's a link for it in the description box, make sure to check it out."}, {"start": 372.64, "end": 396.64, "text": " Thanks for watching and get for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=heB2tD0-r-c
On the Complexity of Two Minute Papers | Two Minute Papers #87
There are some minor changes coming to Two Minute Papers, and I am trying my very best to make it as enjoyable as possible to you, so I would really like to hear your opinion on an issue. The earlier episode showcased in the video: Schrödinger's Smoke - https://www.youtube.com/watch?v=heY2gfXSHBo WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ The thumbnail background image was created by Tulip Vorlax - https://flic.kr/p/84QwGn Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is not an episode about a paper, but it's about the series itself. There are some minor changes coming, and I'm trying my very best to make it as enjoyable as possible to you, so I would really like to hear your opinion on an issue. Many of our episodes are on new topics where I'm trying my best to cover the basics so that the scope of a new research work can be understood clearly. However, as we are continuing our journey deeper into the depths of state-of-the-art research, it inevitably happens that we have to build on already existing knowledge from earlier episodes. The big question is how we should handle such cases. For instance, in the case of a neural network paper, the solution we went for so far was having a quick recap of what a neural network is. We can either have this recap in every episode about, for instance, neural networks, fluid simulations, or photorealistic rendering, and be insidiously annoying to our seasoned Fellow Scholars who know it all. Or, we don't talk about the preliminaries to cater to the more seasoned Fellow Scholars out there, at the expense of new people who are locked out of the conversation as they may be watching their very first episode of Two Minute Papers. So the goal is clear: I'd like the episodes to be as easily understandable as possible, but while keeping the narrative intact so that every term I use is explained in the episode. First, I was thinking about handing out a so-called dictionary in the video description box where all of these terms would be explained briefly. At first, this sounded like a good idea, but most people new to the series would likely not know about it, and for them, the fact that these episodes are not self-contained anymore would perhaps be confusing or, even worse, repulsive. The next idea was that perhaps, instead of re-explaining these terms over and over again, we could add an overlay text in the video for them. The more seasoned Fellow Scholars won't be held up because they know what a Lagrangian fluid simulation is, but someone new to the series could also catch up easily just by reading a line of text that pops up. I think this one would be a formidable solution. I would love to know your opinion on these possible solutions. I personally think that the overlay text is the best, but who knows, maybe a better idea gets raised. Please make sure to let me know below in the comment section whether you have started watching Two Minute Papers recently or maybe you are a seasoned Fellow Scholar, and how you feel about the issue. Have you ever encountered terms that you didn't understand? Or was it the opposite? Am I beating a dead horse with re-explaining all this simple stuff? I'd like to make these episodes the best I possibly can so that the seasoned Fellow Scholars and people new to the show alike can marvel at the wonders of research. All feedback is welcome and please make sure to leave a comment so I can better understand how you feel about this issue and what would make you happier. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.96, "end": 9.48, "text": " This is not an episode about a paper, but it's about the series itself."}, {"start": 9.48, "end": 14.32, "text": " There are some minor changes coming, and I'm trying my very best to make it as enjoyable"}, {"start": 14.32, "end": 19.240000000000002, "text": " as possible to you, so I would really like to hear your opinion on an issue."}, {"start": 19.240000000000002, "end": 24.48, "text": " Many of our episodes are on new topics where I'm trying my best to cover the basics so that"}, {"start": 24.48, "end": 28.32, "text": " the scope of a new research work can be understood clearly."}, {"start": 28.32, "end": 34.0, "text": " However, as we are continuing our journey deeper into the depths of state-of-the-art research,"}, {"start": 34.0, "end": 38.88, "text": " it inevitably happens that we have to build on already existing knowledge from earlier"}, {"start": 38.88, "end": 39.96, "text": " episodes."}, {"start": 39.96, "end": 43.4, "text": " The big question is how we should handle such cases."}, {"start": 43.4, "end": 49.120000000000005, "text": " For instance, in the case of a neural network paper, the solution we went for so far was having"}, {"start": 49.120000000000005, "end": 52.08, "text": " a quick recap for what a neural network is."}, {"start": 52.08, "end": 58.64, "text": " We can either have this recap in every episode about, for instance, neural networks, fluid simulations,"}, {"start": 58.64, "end": 63.8, "text": " or photorealistic rendering, and be insidiously annoying to our seasoned Fellow Scholars who"}, {"start": 63.8, "end": 64.8, "text": " know it all."}, {"start": 64.8, "end": 69.64, "text": " Or, we don't talk about the preliminaries to cater to the more seasoned Fellow Scholars"}, {"start": 69.64, "end": 74.8, "text": " out there at the expense of new people who are locked out of the conversation as they"}, {"start": 74.8, "end": 79.16, "text": " may be watching their very first episode of two-minute papers."}, {"start": 79.16, "end": 84.36, "text": " So the goal is clear, I'd like the episodes to be as easily understandable as possible,"}, {"start": 84.36, "end": 90.28, "text": " but while keeping the narrative intact so that every term I use is explained in the episode."}, {"start": 90.28, "end": 95.6, "text": " First, I was thinking about handing out a so-called dictionary in the video description box"}, {"start": 95.6, "end": 98.67999999999999, "text": " where all of these terms would be explained briefly."}, {"start": 98.67999999999999, "end": 103.36, "text": " At first, this sounded like a good idea, but most people knew to the series would likely"}, {"start": 103.36, "end": 108.64, "text": " not know about it, and for them, the fact that these episodes are not self-contained anymore"}, {"start": 108.64, "end": 112.44, "text": " would perhaps be confusing or even worse, repulsive."}, {"start": 112.44, "end": 118.2, "text": " The next idea was that perhaps, instead of re-explaining these terms over and over again,"}, {"start": 118.2, "end": 121.72, "text": " we could add an overlay text in the video for them."}, {"start": 121.72, "end": 126.0, "text": " The more seasoned Fellow Scholars won't be held up because they know what a Lagrangian"}, {"start": 126.0, "end": 131.2, "text": " fluid simulation is, but someone new to the series could also catch up easily just"}, {"start": 131.2, "end": 133.68, "text": " 
by reading a line of text that pops up."}, {"start": 133.68, "end": 136.4, "text": " I think this one would be a formidable solution."}, {"start": 136.4, "end": 140.0, "text": " I would love to know your opinion on these possible solutions."}, {"start": 140.0, "end": 145.24, "text": " I personally think that the overlay text is the best, but who knows, maybe a better idea"}, {"start": 145.24, "end": 146.24, "text": " gets raised."}, {"start": 146.24, "end": 150.24, "text": " Please make sure to let me know below in the comment section whether you have started"}, {"start": 150.24, "end": 155.48000000000002, "text": " watching two-minute papers recently or maybe you are a seasoned Fellow scholar and how you"}, {"start": 155.48000000000002, "end": 157.24, "text": " feel about the issue."}, {"start": 157.24, "end": 160.56, "text": " Have you ever encountered terms that you didn't understand?"}, {"start": 160.56, "end": 162.12, "text": " Or was it the opposite?"}, {"start": 162.12, "end": 166.12, "text": " Am I beating a dead horse with re-explaining all this simple stuff?"}, {"start": 166.12, "end": 171.4, "text": " I'd like to make these episodes the best I possibly can so that the seasoned Fellow Scholars"}, {"start": 171.4, "end": 176.48000000000002, "text": " and people new to the show alike can marvel at the wonders of research."}, {"start": 176.48000000000002, "end": 181.04, "text": " All feedback is welcome and please make sure to leave a comment so I can better understand"}, {"start": 181.04, "end": 184.16, "text": " how you feel about this issue and what would make you happier."}, {"start": 184.16, "end": 207.07999999999998, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Rdpbnd0pCiI
What is an Autoencoder? | Two Minute Papers #86
Autoencoders are neural networks that are capable of creating sparse representations of the input data and can therefore be used for image compression. There are denoising autoencoders that after learning these sparse representations, can be presented with noisy images. What is even better is a variant that is called the variational autoencoder that not only learns these sparse representations, but can also draw new images as well. We can, for instance, ask it to create new handwritten digits and we can actually expect the results to make sense! _____________________________ The paper "Auto-Encoding Variational Bayes" is available here: http://arxiv.org/pdf/1312.6114.pdf Recommended for you: Recurrent Neural Network Writes Sentences About Images - https://www.youtube.com/watch?v=e-WB4lfg30M Andrej Karpathy's convolutional neural network that you can train in your browser: http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html Sentdex's Youtube channel is available here: https://www.youtube.com/user/sentdex Francois Chollet's blog post on autoencoders: https://blog.keras.io/building-autoencoders-in-keras.html More reading on autoencoders: https://probablydance.com/2016/04/30/neural-networks-are-impressively-good-at-compression/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/ Thumbnail background image source (we have edited the colors and edited it some more): https://pixabay.com/hu/fizet-sz%C3%A1mok-sz%C3%A1mjegyek-kit%C3%B6lt%C3%A9s-937882/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As we have seen in earlier episodes of the series, neural networks are remarkably efficient tools to solve a number of really difficult problems. The first applications of neural networks usually revolved around classification problems. Classification means that we have an image as an input, and the output is, let's say, a simple decision: whether it depicts a cat or a dog. The input will have as many nodes as there are pixels in the input image, and the output will have two units, and we look at the one of these two that fires the most to decide whether it thinks it is a dog or a cat. Between these two, there are hidden layers where the neural network is asked to build an inner representation of the problem that is efficient at recognizing these animals. So, what is an autoencoder? An autoencoder is an interesting variant with two important changes. First, the number of neurons is the same in the input and the output. Therefore, we can expect that the output is an image that is not only the same size as the input, but actually is the same image. Now, this normally wouldn't make any sense. Why would we want to invent a neural network to do the job of a copying machine? So, here goes the second part. We have a bottleneck in one of these layers. This means that the number of neurons in that layer is much less than we would normally see. Therefore, it has to find a way to represent this kind of data the best it can with a much smaller number of neurons. If you have a smaller budget, you have to let go of all the fluff and concentrate on the bare essentials. Therefore, we can't expect the images to be the same, but they are hopefully quite close. These autoencoders are capable of creating sparse representations of the input data and can therefore be used for image compression. I consciously avoid saying they are useful for image compression. Autoencoders offer no tangible advantage over classical image compression algorithms like JPEG. However, as a crumb of comfort, many different variants exist that are useful for different tasks other than compression. There are denoising autoencoders that, after learning these sparse representations, can be presented with noisy images. As they more or less know what this kind of data should look like, they can help in denoising these images. That's pretty cool for starters. What is even better is a variant that is called the variational autoencoder that not only learns these sparse representations, but can also draw new images as well. We can, for instance, ask it to create new handwritten digits and we can actually expect the results to make sense. There is an excellent blog post from François Chollet, the creator of the amazing Keras library for building and training neural networks. Make sure to have a look. With these examples, we were really only scratching the surface and I expect quite a few exciting autoencoder applications to pop up in the near future as well. I cannot wait to get my paws on those papers. Hopefully you Fellow Scholars are also excited. If you are interested in programming, especially in Python, make sure to check out the channel of Sentdex for tons of machine learning programming videos and more. Thanks for watching and for your generous support and I'll see you next time.
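Since the episode is built around the idea of a bottleneck, here is a minimal autoencoder sketch in Keras, in the spirit of the Chollet blog post linked in the description. It assumes TensorFlow/Keras is installed and uses MNIST only as a convenient example; none of this code comes from the episode itself.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 784 input pixels squeezed through a 32-unit bottleneck and decoded back.
inputs = keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)       # the bottleneck layer
outputs = layers.Dense(784, activation="sigmoid")(code)  # reconstruct the image
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train it to copy its input: MNIST digits in, (approximate) MNIST digits out.
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```

A denoising variant would feed corrupted digits as input while keeping the clean digits as the training target, and a variational autoencoder would replace the fixed 32-unit code with a learned mean and variance that can later be sampled to draw new digits.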
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifahir."}, {"start": 5.0, "end": 14.0, "text": " As we have seen in earlier episodes of the series, neural networks are remarkably efficient tools to solve a number of really difficult problems."}, {"start": 14.0, "end": 19.0, "text": " The first applications of neural networks usually revolved around classification problems."}, {"start": 19.0, "end": 28.0, "text": " Classification means that we have an image as an input, and the output is, let's say, a simple decision, whether it depicts a cat or a dog."}, {"start": 28.0, "end": 41.0, "text": " The input will have as many nodes as there are pixels in the input image, and the output will have two units, and we look at the one of these two that fires the most to decide whether it thinks it is a dog or a cat."}, {"start": 41.0, "end": 51.0, "text": " Between these two, there are hidden layers where the neural network is asked to build an inner representation of the problem that is efficient at recognizing these animals."}, {"start": 51.0, "end": 59.0, "text": " So, what is an autoencoder? An autoencoder is an interesting variant with two important changes."}, {"start": 59.0, "end": 64.0, "text": " First, the number of neurons is the same in the input and the output."}, {"start": 64.0, "end": 72.0, "text": " Therefore, we can expect that the output is an image that is not only the same size as the input, but actually is the same image."}, {"start": 72.0, "end": 80.0, "text": " Now, this normally wouldn't make any sense. Why would we want to invent a neural network to do the job of a copying machine?"}, {"start": 80.0, "end": 84.0, "text": " So, here goes the second part. We have a bottleneck in one of these layers."}, {"start": 84.0, "end": 90.0, "text": " This means that the number of neurons in that layer is much less than we would normally see."}, {"start": 90.0, "end": 98.0, "text": " Therefore, it has to find a way to represent this kind of data the best it can with a much smaller number of neurons."}, {"start": 98.0, "end": 104.0, "text": " If you have a smaller budget, you have to let go of all the fluff and concentrate on the bare essentials."}, {"start": 104.0, "end": 109.0, "text": " Therefore, we can't expect the image to be the same, but they are hopefully quite close."}, {"start": 109.0, "end": 117.0, "text": " These autoencoders are capable of creating sparse representations of the input data and can therefore be used for image compression."}, {"start": 117.0, "end": 122.0, "text": " I consciously avoid saying they are useful for image compression."}, {"start": 122.0, "end": 128.0, "text": " Autoencoders offer no tangible advantage over classical image compression algorithms like JPEG."}, {"start": 128.0, "end": 136.0, "text": " However, as a crumb of comfort, many different variants exist that are useful for different tasks other than compression."}, {"start": 136.0, "end": 143.0, "text": " There are denoising autoencoders that after learning these sparse representations can be presented with noisy images."}, {"start": 143.0, "end": 149.0, "text": " As they more or less know how this kind of data should look like, they can help in denoising these images."}, {"start": 149.0, "end": 156.0, "text": " That's pretty cool for starters. 
What is even better is a variant that is called the variational autoencoder"}, {"start": 156.0, "end": 162.0, "text": " that not only learns these sparse representations, but can also draw new images as well."}, {"start": 162.0, "end": 169.0, "text": " We can, for instance, ask it to create new handwritten digits and we can actually expect the results to make sense."}, {"start": 169.0, "end": 177.0, "text": " There is an excellent blog post from Fran\u00e7ois Chalet, the creator of the Amazing Carousel Library for building and training neural networks."}, {"start": 177.0, "end": 179.0, "text": " Make sure to have a look."}, {"start": 179.0, "end": 188.0, "text": " With these examples, we were really only scratching the surface and I expect quite a few exciting autoencoder applications to pop up in the near future as well."}, {"start": 188.0, "end": 195.0, "text": " I cannot wait to get my paws on those papers. Hopefully you fellow scholars are also excited."}, {"start": 195.0, "end": 204.0, "text": " If you are interested in programming, especially in Python, make sure to check out the channel of Centdex for tons of machine learning programming videos and more."}, {"start": 204.0, "end": 219.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=gHMY40kEXzs
The Science of Medal Predictions (2016 Rio Olympics Edition) | Two Minute Papers #85
The 2016 Rio Olympic Games is right around the corner, so it is the perfect time to talk a bit about how we can use science to predict the results. Daniel Johnson, a professor of microeconomics at the Colorado College created a simple prediction model, that, over the past 5 Olympic Games, was able to achieve 94% agreement between the predicted and actual medal counts per nation. What is even more amazing is that the model doesn't even take into consideration the athletic abilities of any of these contenders. ____________________ The paper "A Tale of Two Seasons: Participation and Medal Counts at the Summer and Winter Olympic Games" is available here: https://www.researchgate.net/profile/Daniel_Johnson7/publication/4920482_A_Tale_of_Two_Seasons_Participation_and_Medal_Counts_at_the_Summer_and_Winter_Olympic_Games/links/0c9605229d43e35dbf000000.pdf A media article about this on Forbes: http://www.forbes.com/2010/01/19/olympic-medal-predictions-business-sports-medals.html The Olympics subreddit is available here: https://www.reddit.com/r/olympics/ From an earlier episode: Two Minute Papers - Narrow Band Liquid Simulations - https://www.youtube.com/watch?v=nfPBT71xYVQ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The thumbnail background image was created by Rob124 - https://flic.kr/p/efKqca Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The 2016 Rio Olympic Games is right around the corner, so it is the perfect time to talk a bit about how we can use science to predict the results. Before we start, I'd like to mention that we won't be showcasing any predictions for this year's Olympics; instead we are going to talk about a model that was used to predict the results of previous Olympic Games events. So the following very simple question arises: can we predict the future? And the answer is very simple. No, we can't. End of video, thanks for watching. Well, jokes aside, we cannot predict the future itself, but we can predict what is likely to happen based on our experience of what has happened so far. In mathematics, this is what we call extrapolation. There is also a big difference between trying to extrapolate the results of one athlete or the aggregated number of medals for many athletes, usually an entire nation. To bring up an example about traffic, if we were to predict where one individual car is heading, we would obviously fail most of the time. However, whichever city we live in, we know exactly the hotspots where there are traffic jams every single morning of the year. We cannot accurately predict the behavior of one individual, but if we increase the size of the problem and predict for a group of people, it suddenly gets easier. Going back to the Olympics, will Usain Bolt win the gold on the 100 meters sprint this year? Predicting the results of one athlete is usually hopeless, and we bravely call such endeavors to be mere speculation. The guy whose results we are trying to predict may not even show up this year. As many of you have heard, many of the Russian athletes have been banned from the Olympic Games. Our model would sure as hell not be able to predict this. Or would it? We'll see in a second, but hopefully it is easy to see that macro level predictions are much more feasible than predicting on an individual level. In fact, to demonstrate how much of an understatement it is to say feasible, hold on to your seatbelts, because Daniel Johnson, a professor of microeconomics at the Colorado College, created a simple prediction model that over the past five Olympic Games was able to achieve 94% agreement between the predicted and the actual medal counts per nation. What is even more amazing is that the model doesn't even take into consideration the athletic abilities of any of these contenders. Wow! Media articles report that his model uses only five simple variables: a country's per capita income, population, political structure, climate and host nation advantage. Now, I'd first like to mention that GDP per capita means the gross domestic product of one person in a given country. Therefore it is independent of the population of the country. If we sit down and read the paper, which is a great and very easy read and you should definitely have a look, it's in the video description box. So upon reading the paper we realize there are more variables that are subject to scrutiny. For instance, a proximity factor which encodes the distance from the hosting nation. Not only the hosting nation itself, but its neighbors are also enjoying significant advantages in the form of lower transportation costs and being used to the climate of the venue. Unfortunately I haven't found his predictions for this year's Olympics, but based on the simplicity of the model, it should be quite easy to run the predictions provided that the sufficient data is available. 
The take home message is that usually the bigger the group we are trying to predict results for, the fewer the variables that are enough to explain their behavior. If we are talking about the Olympics, five or six variables are enough to faithfully predict nationwide medal counts. These are amazing results that are also a nice testament to the power of mathematics. I also really like how the citation count of the paper gets a big bump every four years. I wonder why. If you are interested in how the Olympic Games unfold, make sure to have a look at the Olympics subreddit; I found it to be second to none. As always, the link is available in the description box. Thanks for watching and for your generous support, and I'll see you next time.
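For readers who want to see what such a macro-level model looks like mechanically, here is a toy regression sketch in Python. The variable set is a simplified stand-in for the factors mentioned above, all numbers are fabricated, and the linear form is not Johnson's actual specification.

```python
import numpy as np

# One row per nation. Columns (all made up for illustration):
# log GDP per capita, log population, host nation (0/1), neighbor of host (0/1)
X = np.array([
    [10.8, 19.6, 1, 0],
    [10.5, 18.2, 0, 1],
    [ 9.9, 20.9, 0, 0],
    [10.9, 17.3, 0, 0],
    [ 8.7, 20.3, 0, 0],
], dtype=float)
medals = np.array([120.0, 35.0, 70.0, 40.0, 25.0])  # fabricated medal totals

# Ordinary least squares with an intercept term.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, medals, rcond=None)

predicted = A @ coef
print("coefficients:", np.round(coef, 2))
print("predicted medal counts:", np.round(predicted, 1))
```

With only a handful of made-up rows the fit is exact, so the point is purely to show the mechanics: one row per nation, a few macro variables, and a least squares fit whose coefficients can then be applied to a future Games, athletes never enter the picture.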
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Karo Ejona Efehir."}, {"start": 4.8, "end": 11.120000000000001, "text": " The 2016 Rio Olympic Games is right around the corner, so it is the perfect time to talk"}, {"start": 11.120000000000001, "end": 15.280000000000001, "text": " a bit about how we can use science to predict the results."}, {"start": 15.280000000000001, "end": 19.72, "text": " Before we start, I'd like to mention that we won't be showcasing any predictions for"}, {"start": 19.72, "end": 25.0, "text": " this year's Olympics, instead we are going to talk about a model that was used to predict"}, {"start": 25.0, "end": 28.16, "text": " the results of previous Olympic Games events."}, {"start": 28.16, "end": 33.2, "text": " So the following very simple question arises, can we predict the future?"}, {"start": 33.2, "end": 35.36, "text": " And the answer is very simple."}, {"start": 35.36, "end": 37.120000000000005, "text": " No, we can't."}, {"start": 37.120000000000005, "end": 39.8, "text": " End of video, thanks for watching."}, {"start": 39.8, "end": 44.96, "text": " Well jokes aside, we cannot predict the future itself, but we can predict what is likely"}, {"start": 44.96, "end": 49.6, "text": " to happen based on our experience of what happens so far."}, {"start": 49.6, "end": 53.08, "text": " In mathematics, this is what we call extrapolation."}, {"start": 53.08, "end": 57.8, "text": " There is also a big difference between trying to extrapolate the results of one athlete"}, {"start": 57.8, "end": 63.919999999999995, "text": " or the aggregated number of medals for many athletes, usually an entire nation."}, {"start": 63.919999999999995, "end": 69.39999999999999, "text": " To bring up an example about traffic, if we were to predict where one individual car is"}, {"start": 69.39999999999999, "end": 72.64, "text": " heading, we would obviously fail most of the time."}, {"start": 72.64, "end": 77.72, "text": " However, whichever city we live in, we know exactly the hotspots where there are traffic"}, {"start": 77.72, "end": 80.67999999999999, "text": " champs every single morning of the year."}, {"start": 80.67999999999999, "end": 86.12, "text": " We cannot accurately predict the behavior of one individual, but if we increase the size"}, {"start": 86.12, "end": 91.56, "text": " of the problem and predict for a group of people, it suddenly gets easier."}, {"start": 91.56, "end": 97.28, "text": " Going back to the Olympics, will you say in bulk, win the gold on the 100 meters sprint"}, {"start": 97.28, "end": 98.68, "text": " this year?"}, {"start": 98.68, "end": 104.16, "text": " Predicting the results of one athlete is usually hopeless, and we bravely call such endeavors"}, {"start": 104.16, "end": 106.08000000000001, "text": " to be mere speculation."}, {"start": 106.08000000000001, "end": 110.56, "text": " The guy whose results we are trying to predict may not even show up this year."}, {"start": 110.56, "end": 115.28, "text": " As many of you have heard, many of the Russian athletes have been banned from the Olympic"}, {"start": 115.28, "end": 116.04, "text": " games."}, {"start": 116.04, "end": 119.60000000000001, "text": " Our model would sure as hell not be able to predict this."}, {"start": 119.60000000000001, "end": 120.60000000000001, "text": " Or would it?"}, {"start": 120.60000000000001, "end": 125.68, "text": " We'll see in a second, but hopefully it is easy to see that macro level predictions are"}, {"start": 125.68, 
"end": 129.56, "text": " much more feasible than predicting on an individual level."}, {"start": 129.56, "end": 136.28, "text": " In fact, to demonstrate how much of an understatement it is to say feasible, hold on to your seatbelts"}, {"start": 136.28, "end": 142.88, "text": " because Daniel Johnson, a professor of microeconomics at the Colorado College, created a simple"}, {"start": 142.88, "end": 150.48, "text": " prediction model that over the past five Olympic games was able to achieve 94% agreement"}, {"start": 150.48, "end": 154.48, "text": " between the predicted and the actual metal counts per nation."}, {"start": 154.48, "end": 159.51999999999998, "text": " What is even more amazing is that the model doesn't even take into consideration the"}, {"start": 159.51999999999998, "end": 163.24, "text": " athletic abilities of any of these contenders."}, {"start": 163.24, "end": 164.72, "text": " Wow!"}, {"start": 164.72, "end": 170.88, "text": " Media articles report that his model uses only five simple variables, a country's per capita"}, {"start": 170.88, "end": 177.72, "text": " income, population, political structure, climate and host nation advantage."}, {"start": 177.72, "end": 183.64, "text": " Now I'd first like to mention that GDP per capita means the gross domestic product of one"}, {"start": 183.64, "end": 185.84, "text": " person in a given country."}, {"start": 185.84, "end": 189.6, "text": " Therefore it is independent of the population of the country."}, {"start": 189.6, "end": 194.35999999999999, "text": " If we sit down and read the paper, which is a great and very easy read and you should"}, {"start": 194.35999999999999, "end": 197.88, "text": " definitely have a look, it's in the video description box."}, {"start": 197.88, "end": 203.84, "text": " So upon reading the paper we realize there are more variables that are subject to scrutiny."}, {"start": 203.84, "end": 209.44, "text": " For instance, a proximity factor which encodes the distance from the hosting nation."}, {"start": 209.44, "end": 215.44, "text": " Not only the hosting nation itself, but its neighbors are also enjoying significant advantages"}, {"start": 215.44, "end": 221.0, "text": " in the form of lower transportation costs and being used to the climate of the venue."}, {"start": 221.0, "end": 225.72, "text": " Unfortunately I haven't found his predictions for this year's Olympics, but based on the"}, {"start": 225.72, "end": 230.68, "text": " simplicity of the model, it should be quite easy to run the predictions provided that the"}, {"start": 230.68, "end": 232.64, "text": " sufficient data is available."}, {"start": 232.64, "end": 237.28, "text": " The take home message is that usually the bigger the group we are trying to predict results"}, {"start": 237.28, "end": 242.44, "text": " for, the lesser the number of variables that are enough to explain their behavior."}, {"start": 242.44, "end": 247.64, "text": " If we are talking about the Olympics, five or six variables are enough to faithfully"}, {"start": 247.64, "end": 250.44, "text": " predict nationwide metal counts."}, {"start": 250.44, "end": 256.24, "text": " These are amazing results that are also a nice testament to the power of mathematics."}, {"start": 256.24, "end": 261.84, "text": " I also really like how the citation count of the paper gets a big bump every four years."}, {"start": 261.84, "end": 264.44, "text": " I wonder why."}, {"start": 264.44, "end": 268.56, "text": " If you are interested in how the Olympic Games unfold, make 
sure to have a look at the"}, {"start": 268.56, "end": 271.88, "text": " Olympics Reddit I found it to be second to none."}, {"start": 271.88, "end": 275.12, "text": " As always, the link is available in the description box."}, {"start": 275.12, "end": 281.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=a1z6GXj8QK8
Peer Review and the NeurIPS Experiment | Two Minute Papers #84
What is peer review and how is it done? How can we check the validity of a paper? And more importantly, how can we be sure that the peer review process is fair and consistent? We'll talk about these things and how the NIPS experiment addresses them. ____________________ The NIPS experiment: http://blog.mrtz.org/2014/12/15/the-nips-experiment.html http://www.kdnuggets.com/2016/05/embrace-random-acceptance-borderline-papers.html The showcased earlier episode video: Artistic Manipulation of Caustics - https://www.youtube.com/watch?v=K-0KJtk07YU A New Publishing Model in Computer Science by Yann LeCun: http://yann.lecun.com/ex/pamphlets/publishing-models.html WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ The thumbnail background image was created by Quinn Dombrowski (the color of the pen was changed) - https://flic.kr/p/8HRJoc The blind man icon was created by Scott de Jonge - http://www.flaticon.com/free-icon/blind-man-silhouette_8711 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We are here to answer a simple question. What is peer review? Well, in science, making sure that the validity of published results is beyond doubt is of utmost importance. To this end, many scientific journals and conferences exist where researchers can submit their findings in the form of a science paper. As a condition of acceptance, these papers shall undergo extensive scrutiny by typically two to five other scientists. This refereeing process is what we call peer review. Single-blind reviewing means that the names of the reviewers are shrouded in mystery, but the authors of the paper are known to them. In double-blind reviews, however, the papers are anonymized and none of the parties know the names of each other. These different kinds of blind reviews were made to eliminate possible people-related biases. There's a lot of discussion whether they do a good job at that or not, but this is what they are for. After the review, if the results are found to be correct and the reviews are favorable enough, the paper is accepted and subsequently published in a journal and/or presented at a conference. Usually the higher the prestige of the publication venue is, the higher the likelihood of rejection, which inevitably raises a big question. How to choose the papers that are to be accepted? As we are scientists, we have to try to ensure that the peer review is a fair and consistent process. To measure if this is the case, the NIPS experiment was born. NIPS is one of the highest quality conferences in machine learning with a remarkably low acceptance ratio, which typically hovers below 25%. This is indeed remarkably low considering the fact that many of the best research groups in the world submit their finest works here. So here's the astute idea behind the NIPS experiment. A large amount of papers would be secretly disseminated to multiple committees, they would review them without knowing about each other, and we would have a look whether they would accept or reject the same papers. Re-reviewing papers and seeing if the results are the same, if you will. At a given prescribed acceptance ratio, there was a disagreement for 57% of the papers. This means that one of the committees would accept the paper and the other wouldn't, and vice versa. Now, to put this number into perspective, the mathematical model of a random committee was put together. This means that the members of this committee have no idea what they are doing, and as a review, they basically toss up a coin and accept or reject the paper based on the result. The calculations conclude that this random committee would have a disagreement ratio of about 77%. This is hardly something to be proud of. The consistency of expert reviewers is significantly closer to a coin flip than to a hypothetical perfect review process. So experts, 57% disagreement; coin flip committee, 77% disagreement. It is not as bad as the coin flip committee, so the question naturally arises, where are the differences? Well, it seems that the top 10% of the papers are clearly accepted by both committees, and the bottom 25% of the papers are clearly rejected. This is the good news. And the bad news is that anything in between might as well be decided with a coin toss. If the consistency of peer review is subject to maximization, we clearly have to do something different. 
Huge respect to the NIPS organizers for doing this laborious experiment, to the reviewers who did a ton of extra work, and kudos for the fact that the organizers were willing to release such uncomfortable results. This is very important and is the only way of improving our processes. Hopefully, someday we shall have our revenge over the coin flip committee. Can we do something about this? What is a possible solution? Well, of course, this is a large and difficult problem for which I don't pretend to have any perfect solutions. But there is a really interesting idea by a renowned professor about crowdsourcing reviews that I found to be spectacular. I leave the blog posts in the comment section, both for this and the NIPS experiment, and we shall have an entire episode about this soon. Stay tuned. Thanks for watching and for your generous support and I'll see you next time.
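To make the coin flip committee comparison concrete, here is a minimal Python sketch; it is not from the video or from the original NIPS analysis. It assumes an acceptance rate of roughly 22.5%, and that "disagreement" here means the fraction of papers accepted by one committee that the other committee rejects; both of these are my own assumptions, introduced only to show where a number like 77% can come from.

import random

# Hypothetical sketch: two independent "coin flip" committees that each accept
# a paper with probability ACCEPT_RATE, regardless of the paper's quality.
ACCEPT_RATE = 0.225      # assumed acceptance rate, roughly in line with NIPS 2014
N_PAPERS = 1_000_000     # Monte Carlo sample size

accepted_by_a = 0
rejected_by_b = 0
for _ in range(N_PAPERS):
    a_accepts = random.random() < ACCEPT_RATE   # committee A's coin toss
    b_accepts = random.random() < ACCEPT_RATE   # committee B's independent coin toss
    if a_accepts:
        accepted_by_a += 1
        if not b_accepts:
            rejected_by_b += 1

# Of the papers committee A accepted, the fraction committee B rejected:
print(rejected_by_b / accepted_by_a)   # roughly 0.775, i.e. the ~77% coin flip figure

Under these assumptions, a random committee lands at about 77% disagreement simply because the other committee rejects 1 - 0.225 of everything, while the experts' observed 57% is better than chance but still far from a process where both committees agree on nearly every paper.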
[{"start": 0.0, "end": 5.24, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 5.24, "end": 8.16, "text": " We are here to answer a simple question."}, {"start": 8.16, "end": 9.64, "text": " What is peer review?"}, {"start": 9.64, "end": 15.84, "text": " Well, in science, making sure that the validity of published results is beyond doubt, is of utmost"}, {"start": 15.84, "end": 17.080000000000002, "text": " importance."}, {"start": 17.080000000000002, "end": 23.44, "text": " To this end, many scientific journals and conferences exist, where researchers can submit their findings"}, {"start": 23.44, "end": 25.64, "text": " in the form of a science paper."}, {"start": 25.64, "end": 30.8, "text": " As a condition of acceptance, these papers shall undergo extensive scrutiny by typically"}, {"start": 30.8, "end": 34.3, "text": " two to five other scientists."}, {"start": 34.3, "end": 38.04, "text": " This referring process we call peer review."}, {"start": 38.04, "end": 42.68, "text": " Single-blind reviewing means that the names of the reviewers are shrouded in mystery, but"}, {"start": 42.68, "end": 45.56, "text": " the authors of the paper are known to them."}, {"start": 45.56, "end": 50.6, "text": " In double-blind reviews, however, the papers are anonymized and none of the parties know"}, {"start": 50.6, "end": 52.32, "text": " the names of each other."}, {"start": 52.32, "end": 56.88, "text": " These different kinds of blind reviews were made to eliminate possible people-related"}, {"start": 56.88, "end": 57.88, "text": " biases."}, {"start": 57.88, "end": 62.16, "text": " There's a lot of discussion whether they do a good job at that or not, but this is what"}, {"start": 62.16, "end": 63.52, "text": " they are for."}, {"start": 63.52, "end": 68.0, "text": " After the review, if the results are found to be correct and the reviews are favorable"}, {"start": 68.0, "end": 74.48, "text": " enough, the paper is accepted and subsequently published in a journal and slash or presented"}, {"start": 74.48, "end": 76.28, "text": " at a conference."}, {"start": 76.28, "end": 82.12, "text": " Usually the higher the prestige of the publication venue is, the higher the likelihood of rejection,"}, {"start": 82.12, "end": 84.60000000000001, "text": " which inevitably raises a big question."}, {"start": 84.60000000000001, "end": 87.44, "text": " How to choose the papers that are to be accepted?"}, {"start": 87.44, "end": 92.68, "text": " As we are scientists, we have to try to ensure that the peer review is a fair and consistent"}, {"start": 92.68, "end": 93.68, "text": " process."}, {"start": 93.68, "end": 97.68, "text": " To measure if this is the case, the NIPS experiment was born."}, {"start": 97.68, "end": 103.36000000000001, "text": " NIPS is one of the highest quality conferences and machine learning with a remarkably low acceptance"}, {"start": 103.36000000000001, "end": 107.2, "text": " ratio, which typically hovers below 25%."}, {"start": 107.2, "end": 112.24000000000001, "text": " This is indeed remarkably low considering the fact that many of the best research groups"}, {"start": 112.24000000000001, "end": 115.28, "text": " in the world submit their finest works here."}, {"start": 115.28, "end": 118.52000000000001, "text": " So here's the astute idea behind the NIPS experiment."}, {"start": 118.52000000000001, "end": 124.08, "text": " A large amount of papers would be secretly disseminated to multiple committees, they would"}, {"start": 124.08, "end": 
128.56, "text": " review it without knowing about each other, and we would have a look whether they would"}, {"start": 128.56, "end": 131.88, "text": " accept or reject the same papers."}, {"start": 131.88, "end": 136.32, "text": " Re-reviewing papers and see if the results are the same, if you will."}, {"start": 136.32, "end": 143.56, "text": " At a given prescribed acceptance ratio, there was a disagreement for 57% of the papers."}, {"start": 143.56, "end": 148.0, "text": " This means that one of the committees would accept the paper and the other wouldn't and"}, {"start": 148.0, "end": 149.32, "text": " vice versa."}, {"start": 149.32, "end": 154.4, "text": " Now to put this number into perspective, the mathematical model of a random committee"}, {"start": 154.4, "end": 155.72, "text": " was put together."}, {"start": 155.72, "end": 160.35999999999999, "text": " This means that the members of this committee have no idea what they are doing, and as a"}, {"start": 160.35999999999999, "end": 166.28, "text": " review, they basically toss up a coin and accept or reject the paper based on the result."}, {"start": 166.28, "end": 171.64000000000001, "text": " The calculations conclude that this random committee would have this disagreement ratio"}, {"start": 171.64000000000001, "end": 174.32, "text": " of about 77%."}, {"start": 174.32, "end": 176.68, "text": " This is hardly something to be proud of."}, {"start": 176.68, "end": 183.24, "text": " The consistency of expert reviewers is significantly closer to a coin flip than to a hypothetical"}, {"start": 183.24, "end": 185.52, "text": " perfect review process."}, {"start": 185.52, "end": 192.92000000000002, "text": " So experts, 57% disagreement, coin flip committee, 77% disagreement."}, {"start": 192.92, "end": 198.35999999999999, "text": " It is not as bad as the coin flip committee, so the question naturally arises, where are"}, {"start": 198.35999999999999, "end": 199.92, "text": " the differences?"}, {"start": 199.92, "end": 205.88, "text": " Well it seems that the top 10% of the papers are clearly accepted by both committees, the"}, {"start": 205.88, "end": 209.95999999999998, "text": " bottom 25% of the papers are clearly rejected."}, {"start": 209.95999999999998, "end": 211.56, "text": " This is the good news."}, {"start": 211.56, "end": 217.6, "text": " And the bad news is that anything in between might as well be decided with a coin toss."}, {"start": 217.6, "end": 222.67999999999998, "text": " If the consistency of peer review is subject to maximization, we clearly have to do something"}, {"start": 222.68, "end": 223.68, "text": " different."}, {"start": 223.68, "end": 229.16, "text": " Huge respect for the NIPs organizers for doing this laborious experiment, for the reviewers"}, {"start": 229.16, "end": 234.56, "text": " who did a ton of extra work, and kudos for the fact that the organizers were willing to"}, {"start": 234.56, "end": 237.4, "text": " release such uncomfortable results."}, {"start": 237.4, "end": 242.08, "text": " This is very important and is the only way of improving our processes."}, {"start": 242.08, "end": 246.52, "text": " Hopefully someday we shall have our revenge over the coin flip committee."}, {"start": 246.52, "end": 248.60000000000002, "text": " Can we do something about this?"}, {"start": 248.60000000000002, "end": 250.48000000000002, "text": " What is a possible solution?"}, {"start": 250.48, "end": 256.08, "text": " Well, of course this is a large and difficult problem for which I don't pretend to 
have"}, {"start": 256.08, "end": 257.76, "text": " any perfect solutions."}, {"start": 257.76, "end": 263.36, "text": " But there is a really interesting idea by a renowned professor about crowdsourcing reviews"}, {"start": 263.36, "end": 265.64, "text": " that I found to be spectacular."}, {"start": 265.64, "end": 270.44, "text": " I leave the blog post in the comment section both for this and the NIPs experiment and"}, {"start": 270.44, "end": 273.4, "text": " we shall have an entire episode about this soon."}, {"start": 273.4, "end": 274.4, "text": " Stay tuned."}, {"start": 274.4, "end": 283.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZHoNpxUHewQ
Task-based Animation of Virtual Characters | Two Minute Papers #83
This piece of work is about synthesizing believable footstep animations for virtual characters. The paper "Task-based Locomotion" is available here: http://www.cs.ubc.ca/~van/papers/2016-TOG-taskBasedLocomotion/index.html WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this piece of work, you'll see wondrous little sequences of animations where a virtual character is asked to write on a whiteboard, move boxes, and perform different kinds of sitting behaviors. The emphasis is on synthesizing believable footstep patterns for this character. This sounds a bit mundane, but we'll quickly realize that the combination and blending of different footstep styles is absolutely essential for a realistic animation of these tasks. Beyond simple locomotion, walking, if you will, for instance, side-stepping, using toe and heel pivots, or partial turns and steps every now and then are essential in obtaining a proper posture for a number of different tasks. A rich vocabulary of these movement types and proper transitions between them leads to really amazing animation sequences that you can see in the video. For instance, one of the most heinous examples of the lack of proper locomotion animation, which we can witness in older computer games and sometimes even today, is when a character writing on a whiteboard suddenly runs out of space, turns away from it, walks a bit, turns back towards the whiteboard, and continues writing there. Even if we have impeccable looking, photorealistically rendered characters, such robotic behaviors really ruin the immersion. In reality, a simple side-step will do the job, and this is exactly what the algorithm tells the character to perform. Very simple and smooth. This technique works by decomposing a given task into several sub-tasks, like starting to sit on a box or getting up, and choosing the appropriate footstep types and transitions for them. One can also mark different tasks as being low or high effort, which are shown with green and blue. A low effort task could mean fixing a minor error on the whiteboard nearby without moving there, and a high effort task, which we see marked with blue, would be continuing our writing on a different part of the whiteboard. For these tasks, the footsteps are planned accordingly. Really cool. This piece of work is a fine example of the depth and complexity of computer graphics and animation research, and how even the slightest failure in capturing fine-scale details is enough to break the immersion of reality. It is also really amazing that we have so many people who are interested in watching these videos about research, and quite a few of you decided to also support us on Patreon. I feel really privileged to have such amazing supporters like you, Fellow Scholars. As always, I kindly thank you for this at the end of these videos, so here goes. Thanks for watching and for your generous support, and I'll see you next time.
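As a rough illustration only, and not code from the paper, the kind of task decomposition described above could be represented as a list of sub-tasks, each carrying an effort label and a planned footstep pattern. Every name, footstep type, and effort level below is a placeholder of my own, chosen just to mirror the whiteboard example.

from dataclasses import dataclass
from enum import Enum

class Effort(Enum):
    LOW = "low"    # e.g. fixing a nearby typo without relocating
    HIGH = "high"  # e.g. continuing to write on a distant part of the board

class Footstep(Enum):
    SIDE_STEP = "side-step"
    TOE_PIVOT = "toe pivot"
    HEEL_PIVOT = "heel pivot"
    PARTIAL_TURN = "partial turn"
    FULL_STEP = "full step"

@dataclass
class SubTask:
    name: str
    effort: Effort
    footsteps: list  # the footstep pattern planned for this sub-task

# A hypothetical decomposition of the whiteboard-writing task:
whiteboard_task = [
    SubTask("fix typo nearby", Effort.LOW, [Footstep.TOE_PIVOT]),
    SubTask("write on far side of board", Effort.HIGH,
            [Footstep.SIDE_STEP, Footstep.SIDE_STEP, Footstep.PARTIAL_TURN]),
]

for sub in whiteboard_task:
    steps = ", ".join(s.value for s in sub.footsteps)
    print(f"{sub.name} ({sub.effort.value} effort): {steps}")

This only illustrates the bookkeeping; the actual paper's contribution is in how the footstep types and transitions are chosen and blended, which this sketch does not attempt to reproduce.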
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.38, "end": 12.56, "text": " In this piece of work, you'll see wondrous little sequences of animations where a virtual character is asked to write on a whiteboard,"}, {"start": 12.56, "end": 16.8, "text": " move boxes, and perform different kinds of sitting behaviors."}, {"start": 16.8, "end": 21.7, "text": " The emphasis is on synthesizing believable footstep patterns for this character."}, {"start": 21.7, "end": 32.519999999999996, "text": " This sounds a bit mundane, but will quickly realize that the combination and blending of different footstep styles is absolutely essential for a realistic animation of these tasks."}, {"start": 32.519999999999996, "end": 47.64, "text": " Beyond simple locomotion, walking, if you will, for instance, side-stepping, using toe and heel pivots, or partial turns and steps every now and then are essential in obtaining a proper posture for a number of different tasks."}, {"start": 47.64, "end": 56.44, "text": " A rich vocabulary of these movement types and proper transitions between them lead to really amazing animation sequences that you can see in the video."}, {"start": 56.44, "end": 66.3, "text": " For instance, one of the most heenest examples of the lack of animating proper locomotion we can witness in older computer games, and sometimes even today,"}, {"start": 66.3, "end": 78.06, "text": " is when a character is writing on a whiteboard who suddenly runs out of space, turns away from it, walks a bit, turns back towards the whiteboard, and continues writing there."}, {"start": 78.06, "end": 85.96, "text": " Even if we have impeccable looking photorealistically rendered characters, such robotic behaviors really ruin the immersion."}, {"start": 85.96, "end": 93.4, "text": " In reality, a simple side-stepping will do the job, and this is exactly what the algorithm tells the character to perform."}, {"start": 93.4, "end": 95.36, "text": " Very simple and smooth."}, {"start": 95.36, "end": 107.14, "text": " This technique works by decomposing a given task to several sub-tasks, like starting to sit on a box or getting up and choosing the appropriate footsteps types and transitions for them."}, {"start": 107.14, "end": 113.94, "text": " One can also mark different tasks as being low or high effort that are marked with green and blue."}, {"start": 113.94, "end": 120.18, "text": " A low effort task could mean fixing a minor error on the whiteboard nearby without moving there,"}, {"start": 120.18, "end": 127.5, "text": " and a high effort task that we see marked with blue would be continuing our writing on a different part of the whiteboard."}, {"start": 127.5, "end": 131.1, "text": " For these tasks, the footsteps are planned accordingly."}, {"start": 131.1, "end": 132.34, "text": " Really cool."}, {"start": 132.34, "end": 139.14000000000001, "text": " This piece of work is a fine example of the depth and complexity of computer graphics and animation research,"}, {"start": 139.14000000000001, "end": 146.14000000000001, "text": " and how even the slightest failure in capturing fine-scale details is enough to break the immersion of reality."}, {"start": 146.14, "end": 152.5, "text": " It is also really amazing that we have so many people who are interested in watching these videos about research,"}, {"start": 152.5, "end": 156.26, "text": " and quite a few of you decided to also support us on Patreon."}, {"start": 156.26, "end": 
161.57999999999998, "text": " I feel really privileged to have such amazing supporters like you fellow scholars."}, {"start": 161.57999999999998, "end": 166.57999999999998, "text": " As always, I kindly thank you for this at the end of these videos, so here goes."}, {"start": 166.58, "end": 178.98000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1ypV5ZiIbdA
What is Optimization? + Learning Gradient Descent | Two Minute Papers #82
Let's talk about what mathematical optimization is, how gradient descent can solve simpler optimization problems, and Google DeepMind's proposed algorithm that automatically learns optimization algorithms. The paper "Learning to learn by gradient descent by gradient descent" is available here: http://arxiv.org/pdf/1606.04474v1.pdf Source code: https://github.com/deepmind/learning-to-learn ______________________________ Recommended for you: Gradients, Poisson's Equation and Light Transport - https://www.youtube.com/watch?v=sSnDTPjfBYU WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz The chihuahua vs muffin image is a courtesy of teenybiscuit - https://twitter.com/teenybiscuit More fun stuff here: http://twistedsifter.com/2016/03/puppy-or-bagel-meme-gallery/ The thumbnail background image was created by Alan Levine - https://flic.kr/p/vbEd1W Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we're not going to have the usual visual fireworks that we had with most topics in computer graphics, but I really hope you'll still find this episode enjoyable and stimulating. This episode is also going to be a bit heavy on what optimization is, and we'll talk a little bit at the end about the intuition behind the paper itself. We are going to talk about mathematical optimization. This term is not to be confused with the word optimization that we use in our everyday lives for, for instance, improving the efficiency of computer code or a workflow. This kind of optimization means finding one, hopefully optimal, solution from a set of possible candidate solutions. An optimization problem is given in the following way. One, there's a set of variables we can play with, and two, there's an objective function that we wish to minimize or maximize. Well, this probably sounds great for mathematicians, but for everyone else, maybe this is a bit confusing. Let's build a better understanding of this concept through an example. For instance, let's imagine that we have to cook a meal for our friends from a given set of ingredients. The question is how much salt, vegetables, and meat go into the pan. These are variables that we can play with, and the goal is to choose the optimal amount of these ingredients to maximize the tastiness of the meal. Tastiness will be our objective function, and for a moment we shall pretend that tastiness is an objective measure of a meal. This was just one toy example, but the list of applications is endless. In fact, optimization is so incredibly ubiquitous that there is hardly any field of science where some form of it is not used to solve difficult problems. For instance, if we have the plan of a bridge, we can ask an optimizer to tell us the minimal amount of building materials we need to build it in a way that it remains stable. We can also optimize the layout of the bridge itself to make sure the inner tension and 
compression forces line up well. A big part of deep learning is actually also an optimization problem. There is a given set of neurons, and the variables are when they should be activated. And we are fiddling with these variables to minimize the output error, which can be, for instance, our accuracy in guessing whether a picture depicts a muffin or a chihuahua. The question for almost any problem is usually not whether it can be formulated as an optimization problem, but whether it is worth it. And by worth it, I mean the question of whether we can solve it quickly and reliably. An optimizer is a technique that is able to solve these optimization problems and offer us a hopefully satisfactory solution to them. There are many algorithms that excel at solving problems of different complexities, but what ties them together is that they are usually handcrafted techniques written by really smart mathematicians. Gradient descent is one of the simplest optimization algorithms, where we change each of the variables around a bit and, as a result, see if the objective function changes favorably. After finding a direction that leads to the most favorable changes, we shall continue our journey in that direction. What does this mean in practice? Intuitively, in our cooking example, after making several meals, we would ask our guests about the tastiness of these meals. From their responses, we would recognize that adding a bit more salt led to very favorable results, and since these people are notorious meat eaters, decreasing the amount of vegetables and increasing the meat content also led to favorable reviews. And we, of course, on the back of this newfound knowledge, will cook more with these variable changes in pursuit of the best possible meal in the history of mankind. This is something that is reasonably close to what gradient descent is in mathematics. A slightly more sophisticated version of gradient descent is also a very popular way of training neural networks. If you have any questions regarding the gradient part, we had an extended Two Minute Papers episode on what gradients are and how to use them to build an awesome algorithm for light transport. It is available, where? Well, of course, in the video description box, Károly. Why are you even asking? So what about the paper part? This incredible new work from Google DeepMind shows that an optimization algorithm itself can emerge as a result of learning. An algorithm is not a single decision, like deciding what an image depicts or how we should grade a student essay. It is a sequence of steps we have to take. If we are talking about output sequences, we'll definitely need to use a recurrent neural network for that. Their proposed learning algorithm can create new optimization techniques that outperform previously existing methods, not everywhere, but on a set of specialized problems. I hope you've enjoyed the journey. We'll talk quite a bit about optimization in the future. You'll love it. Thanks for watching and for your generous support. I'll see you next time.
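To make the gradient descent intuition above a bit more tangible, here is a minimal Python sketch; it is not code from the paper. The "tastiness" objective, its peak, and all constants are made up for illustration. The sketch estimates the gradient with finite differences, the code analogue of nudging each ingredient a little and asking the guests how the meal changed, and then steps in the most favorable direction. Since we maximize tastiness, this is gradient ascent, the maximization flavor of gradient descent.

# Minimal sketch of gradient ascent on a hypothetical "tastiness" objective.
def tastiness(salt, vegetables, meat):
    # A made-up objective with a single peak at (2.0, 1.0, 3.0).
    return -((salt - 2.0) ** 2 + (vegetables - 1.0) ** 2 + (meat - 3.0) ** 2)

def numerical_gradient(f, params, eps=1e-5):
    # Nudge each variable a bit and see how the objective responds.
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((f(*bumped) - f(*params)) / eps)
    return grads

params = [0.0, 0.0, 0.0]   # starting amounts of salt, vegetables, meat
learning_rate = 0.1

for step in range(200):
    grads = numerical_gradient(tastiness, params)
    # Move each variable in the direction that improved tastiness the most.
    params = [p + learning_rate * g for p, g in zip(params, grads)]

print([round(p, 3) for p in params])   # approaches [2.0, 1.0, 3.0]

Training a neural network follows the same pattern, except the variables are millions of weights, the objective is the output error to be minimized rather than maximized, and the gradient is computed analytically with backpropagation instead of finite differences.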
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.8, "end": 9.94, "text": " Today, we're not going to have the usual visual fireworks that we had with most topics in"}, {"start": 9.94, "end": 15.8, "text": " computer graphics, but I really hope you'll still find this episode enjoyable and stimulating."}, {"start": 15.8, "end": 20.92, "text": " This episode is also going to be a bit heavy on what optimization is and will talk a little"}, {"start": 20.92, "end": 25.04, "text": " bit at the end about the intuition of the paper itself."}, {"start": 25.04, "end": 28.76, "text": " We are going to talk about mathematical optimization."}, {"start": 28.76, "end": 34.160000000000004, "text": " This term is not to be confused with the word optimization that we use in our everyday"}, {"start": 34.160000000000004, "end": 40.400000000000006, "text": " lives for, for instance, improving the efficiency of a computer code or a workflow."}, {"start": 40.400000000000006, "end": 46.2, "text": " This kind of optimization means finding one hopefully optimal solution from a set of possible"}, {"start": 46.2, "end": 47.88, "text": " candidate solutions."}, {"start": 47.88, "end": 51.36, "text": " An optimization problem is given the following way."}, {"start": 51.36, "end": 56.400000000000006, "text": " One, there's a set of variables we can play with, and two, there's an objective function"}, {"start": 56.4, "end": 59.519999999999996, "text": " that we wish to minimize or maximize."}, {"start": 59.519999999999996, "end": 65.36, "text": " Well, this probably sounds great for mathematicians, but for everyone else, maybe this is a bit"}, {"start": 65.36, "end": 66.36, "text": " confusing."}, {"start": 66.36, "end": 70.36, "text": " Let's build a better understanding of this concept through an example."}, {"start": 70.36, "end": 75.16, "text": " For instance, let's imagine that we have to cook a meal for our friends from a given"}, {"start": 75.16, "end": 76.64, "text": " set of ingredients."}, {"start": 76.64, "end": 82.44, "text": " The question is how much salt vegetables and meat goes into the pan."}, {"start": 82.44, "end": 87.48, "text": " These are variables that we can play with, and the goal is to choose the optimal amount"}, {"start": 87.48, "end": 92.32, "text": " of these ingredients to maximize the tastiness of the meal."}, {"start": 92.32, "end": 97.48, "text": " Tastiness will be our objective function, and for a moment we shall pretend that tastiness"}, {"start": 97.48, "end": 99.8, "text": " is an objective measure of a meal."}, {"start": 99.8, "end": 104.52, "text": " This was just one toy example, but the list of applications is endless."}, {"start": 104.52, "end": 108.44, "text": " In fact, optimization is so incredibly ubiquitous."}, {"start": 108.44, "end": 113.75999999999999, "text": " There is hardly any field of science where some form of it is not used to solve difficult"}, {"start": 113.75999999999999, "end": 114.75999999999999, "text": " problems."}, {"start": 114.75999999999999, "end": 120.28, "text": " For instance, if we have the plan of a bridge, we can ask it to tell us the minimal amount"}, {"start": 120.28, "end": 125.03999999999999, "text": " of building materials we need to build it in a way that it remains stable."}, {"start": 125.03999999999999, "end": 130.44, "text": " We can also optimize the layout of the bridge itself to make sure the inner tension and"}, {"start": 130.44, "end": 132.68, "text": " 
compression forces line up well."}, {"start": 132.68, "end": 137.36, "text": " A big part of deep learning is actually also an optimization problem."}, {"start": 137.36, "end": 142.68, "text": " There are a given set of neurons, and the variables are when they should be activated."}, {"start": 142.68, "end": 147.44000000000003, "text": " And we are fiddling with these variables to minimize the output error, which can be,"}, {"start": 147.44000000000003, "end": 154.72000000000003, "text": " for instance, our accuracy in guessing whether a picture depicts a muffin or a chiwawa."}, {"start": 154.72000000000003, "end": 160.92000000000002, "text": " The question for almost any problem is usually not whether it can be formulated as an optimization"}, {"start": 160.92000000000002, "end": 163.72000000000003, "text": " problem, but whether it is worth it."}, {"start": 163.72, "end": 168.48, "text": " And by worth it, I mean the question whether we can solve it quickly and reliably."}, {"start": 168.48, "end": 173.76, "text": " An optimizer is a technique that is able to solve these optimization problems and offer"}, {"start": 173.76, "end": 177.04, "text": " us a hopefully satisfactory solution to them."}, {"start": 177.04, "end": 181.8, "text": " There are many algorithms that excel at solving problems of different complexities, but what"}, {"start": 181.8, "end": 187.16, "text": " ties them together is that they are usually handcrafted techniques written by really smart"}, {"start": 187.16, "end": 188.64, "text": " mathematicians."}, {"start": 188.64, "end": 193.48, "text": " Gradient descent is one of the simplest optimization algorithms where we change each of the"}, {"start": 193.48, "end": 199.56, "text": " variables around a bit and as a result, see if the objective function changes favorably."}, {"start": 199.56, "end": 204.04, "text": " After finding a direction that leads to the most favorable changes, we shall continue"}, {"start": 204.04, "end": 206.16, "text": " our journey in that direction."}, {"start": 206.16, "end": 208.12, "text": " What does this mean in practice?"}, {"start": 208.12, "end": 213.83999999999997, "text": " Intuitively, in our cooking example, after making several meals, we would ask our guests"}, {"start": 213.83999999999997, "end": 216.76, "text": " about the tastiness of these meals."}, {"start": 216.76, "end": 222.07999999999998, "text": " From their responses, we would recognize that adding a bit more salt led to very favorable"}, {"start": 222.08, "end": 228.12, "text": " results and since these people are notorious meat eaters, decreasing the amount of vegetables"}, {"start": 228.12, "end": 232.60000000000002, "text": " and increasing the meat content also led to favorable reviews."}, {"start": 232.60000000000002, "end": 237.76000000000002, "text": " And we, of course, on the back of this newfound knowledge, will cook more with these variable"}, {"start": 237.76000000000002, "end": 243.36, "text": " changes in pursuit of the best possible meal in the history of mankind."}, {"start": 243.36, "end": 247.72000000000003, "text": " This is something that is reasonably close to what gradient descent is in mathematics."}, {"start": 247.72, "end": 253.35999999999999, "text": " A slightly more sophisticated version of gradient descent is also a very popular way of training"}, {"start": 253.35999999999999, "end": 254.76, "text": " neural networks."}, {"start": 254.76, "end": 259.76, "text": " If you have any questions regarding the gradient part, we had an extended 
two-minute papers"}, {"start": 259.76, "end": 265.0, "text": " episode on what gradients are and how to use them to build an awesome algorithm for light"}, {"start": 265.0, "end": 266.0, "text": " transport."}, {"start": 266.0, "end": 268.28, "text": " It is available, where?"}, {"start": 268.28, "end": 272.04, "text": " Well, of course, in the video description box, Karoy."}, {"start": 272.04, "end": 273.96, "text": " Why are you even asking?"}, {"start": 273.96, "end": 279.23999999999995, "text": " So what about the paper part? This incredible new work of Google DeepMind shows that an"}, {"start": 279.23999999999995, "end": 284.2, "text": " optimization algorithm itself can emerge as a result of learning."}, {"start": 284.2, "end": 290.15999999999997, "text": " An algorithm itself is not considered the same one thing as deciding what an image depicts"}, {"start": 290.15999999999997, "end": 292.79999999999995, "text": " or how we should grade a student essay."}, {"start": 292.79999999999995, "end": 296.64, "text": " It is an algorithm, a sequence of steps we have to take."}, {"start": 296.64, "end": 301.35999999999996, "text": " If we are talking about output sequences, we'll definitely need to use a recurrent neural"}, {"start": 301.35999999999996, "end": 302.59999999999997, "text": " network for that."}, {"start": 302.6, "end": 307.96000000000004, "text": " Their proposed learning algorithm can create new optimization techniques that outperform"}, {"start": 307.96000000000004, "end": 313.24, "text": " previously existing methods, not everywhere, but on a set of specialized problems."}, {"start": 313.24, "end": 314.96000000000004, "text": " I hope you've enjoyed the journey."}, {"start": 314.96000000000004, "end": 317.96000000000004, "text": " We'll talk quite a bit about optimization in the future."}, {"start": 317.96000000000004, "end": 318.96000000000004, "text": " You'll love it."}, {"start": 318.96000000000004, "end": 321.56, "text": " Thanks for watching and for your generous support."}, {"start": 321.56, "end": 338.72, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=zLzhsyeAie4
Bundlefusion: 3D Scenes from 2D Videos | Two Minute Papers #81
This piece of work enables us to walk around in a room with a camera, and create a complete 3D computer model from the video footage. Note that the title says "2D", but since RGB-D cameras are relatively new, they are both referred to as 2D and 3D (I've heard 2.5D as well before). We went with the 2D for now and I hope it won't raise any confusion! :) ____________________________ The paper "BundleFusion: Real-time Globally Consistent 3D Reconstruction using Online Surface Re-integration" is available here: http://graphics.stanford.edu/projects/bundlefusion/ WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE: David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton. https://www.patreon.com/TwoMinutePapers We also thank Experiment for sponsoring our series. - https://experiment.com/ The thumbnail background image was created by gregzaal - http://www.blendswap.com/blends/view/74382 Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook → https://www.facebook.com/TwoMinutePapers/ Twitter → https://twitter.com/karoly_zsolnai Web → https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work enables us to walk around in a room with a camera and create a complete 3D computer model from the video footage. The technique has a really cool effect where the 3D model is continuously refined as we obtain more and more data by walking around with our camera. This is a very difficult problem, and a good solution to it offers a set of cool potential applications. If we have a 3D model of a scene, what can we do with it? Well, of course, assign different materials to the objects and run a light simulation program for architectural visualization applications, animation movies, and so on. We can also easily scan a lot of different pieces of furniture and create a useful database out of them. There are tons more applications, but I think this should do for starters. Normally, if one has to create a 3D model of a room or a building, the bottom line is that it requires several days or weeks of labor. Fortunately, with this technique, we'll obtain a 3D model in real time and we won't have to go through these tribulations. However, I'd like to note that the models are still far from perfect. If we are interested in the many small intricate details, we have to add them back by hand. Previous methods were able to achieve similar results, but they suffer from a number of different drawbacks. For instance, most of them don't support traditional consumer cameras, or they take minutes to hours to perform the reconstruction. To produce the results presented in the paper, an Nvidia Titan X video card was used, which is currently one of the pricier pieces of equipment for consumers, but not so much for companies, who are typically interested in these applications. If we take into consideration the rate at which graphics hardware is improving, anyone will be able to run this at home in real time in a few years' time. The comparisons to previous works reveal that this technique is not only real time, but the quality of the results is mostly comparable to, and in some cases surpasses, previous methods. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Yjona Yfahir."}, {"start": 4.96, "end": 10.22, "text": " This piece of work enables us to walk around in a room with a camera and create a complete"}, {"start": 10.22, "end": 13.42, "text": " 3D computer model from the video footage."}, {"start": 13.42, "end": 18.86, "text": " The technique has a really cool effect where the 3D model is continuously refined as we"}, {"start": 18.86, "end": 22.7, "text": " obtain more and more data by walking around with our camera."}, {"start": 22.7, "end": 27.82, "text": " This is a very difficult problem and a good solution to this offers a set of cool potential"}, {"start": 27.82, "end": 29.02, "text": " applications."}, {"start": 29.02, "end": 32.7, "text": " If we have a 3D model of a scene, what can we do with it?"}, {"start": 32.7, "end": 38.06, "text": " Well, of course, assign different materials to them and run a light simulation program"}, {"start": 38.06, "end": 42.9, "text": " for architecture of visualization applications, animation movies, and so on."}, {"start": 42.9, "end": 48.18, "text": " We can also easily scan a lot of different furnitures and create a useful database out"}, {"start": 48.18, "end": 49.18, "text": " of them."}, {"start": 49.18, "end": 53.019999999999996, "text": " There are tons of more applications, but I think this should do for starters."}, {"start": 53.019999999999996, "end": 58.42, "text": " Normally, if one has to create a 3D model of a room or a building, the bottom line is"}, {"start": 58.42, "end": 62.14, "text": " that it requires several days or weeks of labor."}, {"start": 62.14, "end": 67.14, "text": " Fortunately, with this technique, we'll obtain a 3D model in real time and we won't have"}, {"start": 67.14, "end": 69.22, "text": " to go through these tribulations."}, {"start": 69.22, "end": 74.26, "text": " However, I'd like to note that the models are still, by far, not perfect."}, {"start": 74.26, "end": 80.14, "text": " If we are interested in the many small intricate details, we have to add them back by hand."}, {"start": 80.14, "end": 84.9, "text": " Previous methods were able to achieve similar results, but they suffer from a number of different"}, {"start": 84.9, "end": 85.9, "text": " drawbacks."}, {"start": 85.9, "end": 91.5, "text": " For instance, most of them don't support traditional consumer cameras or take minutes to hours"}, {"start": 91.5, "end": 93.30000000000001, "text": " to perform the reconstruction."}, {"start": 93.30000000000001, "end": 98.5, "text": " To produce the results presented in the paper, an Nvidia Titan X video card was used, which"}, {"start": 98.5, "end": 104.18, "text": " is currently one of the pricier pieces of equipment for consumers, but not so much for companies"}, {"start": 104.18, "end": 106.86000000000001, "text": " who are typically interested in these applications."}, {"start": 106.86000000000001, "end": 111.62, "text": " If we take into consideration the rate at which graphical hardware is improving, anyone"}, {"start": 111.62, "end": 115.86000000000001, "text": " will be able to run this at home in real time in a few years time."}, {"start": 115.86, "end": 121.1, "text": " The comparisons to previous works reveal that this technique is not only real time, but"}, {"start": 121.1, "end": 126.53999999999999, "text": " the quality of the results is mostly comparable, and in some cases, it surpasses previous"}, {"start": 126.53999999999999, "end": 127.53999999999999, 
"text": " methods."}, {"start": 127.54, "end": 154.98000000000002, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]