Dataset columns:
CHANNEL_NAME: stringclasses (1 value)
URL: stringlengths (43 to 43)
TITLE: stringlengths (19 to 90)
DESCRIPTION: stringlengths (475 to 4.65k)
TRANSCRIPTION: stringlengths (0 to 20.1k)
SEGMENTS: stringlengths (2 to 30.8k)
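The SEGMENTS column stores each row's timed transcript as a JSON string, in the format visible in the rows below. A minimal sketch of decoding one such cell in Python; the example string is abbreviated from the first row:

```python
import json

# One (abbreviated) cell from the SEGMENTS column: a JSON array of
# {"start", "end", "text"} records, with times in seconds.
raw = '[{"start": 0.0, "end": 4.8, "text": " And dear fellow scholars, this is two minute papers."}]'

for seg in json.loads(raw):
    # Print "start -> end  text", stripping the leading space Whisper emits.
    print(f"{seg['start']:7.2f} -> {seg['end']:7.2f}  {seg['text'].strip()}")
```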
Two Minute Papers
https://www.youtube.com/watch?v=nE5iVtwKerA
OpenAI’s Whisper Learned 680,000 Hours Of Speech!
❤️ Check out Anyscale and try it for free here: https://www.anyscale.com/papers 📝 The paper "Robust Speech Recognition via Large-Scale Weak Supervision" is available here: https://openai.com/blog/whisper/ Try it out (note: the Scholarly Stampede appears to be in order - we barely published the video and there are already longer wait times): https://huggingface.co/spaces/openai/whisper Source code: https://github.com/openai/whisper Lex transcriptions by Andrej Karpathy: https://karpathy.ai/lexicap/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Chapters: 0:00 Teaser 0:25 More features 0:40 Speed talking transcription 1:00 Accent transcription 1:28 96 more languages! 1:50 What about other methods? 2:05 680,000 hours! 2:14 Is this any good? 3:20 As good as humans? 4:32 The ultimate test! 5:15 What is all this good for? 6:13 2 more good news 6:40 So simple! 6:55 More training data Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai
Dear fellow scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. OpenAI's new Whisper AI is able to listen to what we say and transcribe it. Your voice goes in, and this text comes out, like this. This is incredible, and it is going to change everything. As you see, when running through these few sentences, it passes with flying colors. Well, stay tuned, because you will see if we were able to break it later in this video. And can it be as good as a human? We will test that too. But first, let's try to break it with this speed-talking person. "This is the Micro Machine Man presenting the most midget miniature motorcade of Micro Machines. Each one has dramatic details, terrific trim, precision paint jobs, plus incredible Micro Machine Pocket Play Sets. There's a police station, fire station, restaurant..." Wow, that's going to be hard. So, let's see the result. Wow, that is incredible. And that's not all, it can do so much more. For instance, it does accents too. Here is an example. "One of the most famous landmarks on the board of the three holes. And the method is that metal and the midgetions spot one hole." So good. Now, when talking about accents, I am here too, and I will try my luck later in this video as well. The results were interesting, to say the least. But wait, this knows not only English: scientists at OpenAI said, let's throw in 96 other languages too. Here is French, for example. "Whisper est un système de reconnaissance automatique de la parole, entraîné sur 680 000 heures de données." (French: "Whisper is an automatic speech recognition system, trained on 680,000 hours of data.") And as you see, it also translates it into English. So cool. Now, this is all well and good, but wait a second: transcription APIs already exist. For instance, here on YouTube, you can also request those for many videos. So, what is new here? Why publish this paper? Is this better? Also, what do we get for the 680,000 hours of training? Well, let's have a look. This better be good. Wow! What happened here? This is not a good start. At first sight, it seems that we are not getting a great deal out of this AI at all. Look, here between the 20 to 40 decibel signal-to-noise range, which means a good-quality speech signal, it is the highest. So, is it the best AI around for transcription? Well, not quite. You see, what we are also looking at is the word error rate here, which is subject to minimization. That means the smaller, the better. We noted that 20 to 40 decibels is considered a good-quality signal. Here, it has a higher error rate than previous techniques. But wait, look at that. When going to 5 to 10 decibels and below, these signals are so bad that we can barely tell them from noise. For instance, imagine sitting in a really loud pub. And here is where Whisper really shines: here, it is the best. And this is a good paper, so we have plenty more data on how it compares to a bunch of previous techniques. Look, once again, we have the word error rate. This is subject to minimization; lower is better. From A to D, you see other previous automatic speech recognition systems, and it beats all of them. And what do we have here? Now, hold on to your papers, because can that really be, is it as good as a human? That can't be, right? Well, the answer is yes, it can be as good as a human. Kind of. You see, it outperforms these professional human transcription services and is at the very least competitive with the best ones. An AI that transcribes as well as a professional human does. Wow, this truly feels like we are living in a science fiction movie. What a time to be alive.
Humans. Okay, it is as good as many humans. That's all right, but does this pass the ultimate test for a speech AI? What would that be? Of course, that is the Károly test. That would be me speaking with the crazy accent. Let's see. Dear fellow scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. And dear fellow scholars, I don't know what's going on here. It got my name perfectly; perhaps that is a sign of a superintelligence that is in the making. Wow, the capitalization of Two Minute Papers is all right too. Now, dear fellow scholars, let's try this again. And now, that is what I expected to happen. The regular speech part is transcribed well, and it flubbed my name. So, no superintelligence yet, at least not reliably. So, what is all this good for? Well, imagine that you are looking at this amazing interview from Lex Fridman on superintelligence. And it is one and a half hours. Yes, that is very short for Lex. Now, we know that they talk about immortality, but where exactly? Well, that's not a problem anymore. Look, Andrej Karpathy ran Whisper on every single episode of Lex's podcast, and there we go: this is the relevant part about immortality. That is incredible. Of course, you fellow scholars know that YouTube also helps us with its own transcription feature, or we can also look at the chapter markers; however, not all video and audio is on YouTube. And here comes the kicker: Whisper works everywhere. How cool is that? And here comes the best part. Two amazing pieces of news. One, it is open source, and two, not only that, but you can try it now too. I put a link to both of these in the video description, but as always, please be patient. Whenever we link to something, you fellow scholars are so excited to try it out that we have crashed a bunch of webpages before. This is what we call the scholarly stampede. So I hear you asking, okay, but what is under the hood here? If you have a closer look at the paper, you see that it is using a simple algorithm, a transformer, with a vast dataset, and that this can get very, very far. You see here that it makes great use of that 680,000 hours of human speech: languages other than English and translation improve a great deal if we add more, and even the English part improves a bit too. So this indicates that if we gave it even more data, it might improve even more. And don't forget, it can deal with noisy data really well, so adding more might not be as big of a challenge, and it is already as good as many professional humans. Wow, I can only imagine what this will be able to do just a couple more papers down the line. What a time to be alive. This episode is brought to you by Anyscale, the company behind Ray, the fastest-growing open source framework for scalable AI and scalable Python. Thousands of organizations use Ray, including OpenAI, Uber, Amazon, Spotify, Netflix, and more. Ray lets developers iterate faster by providing common infrastructure for scaling data ingest and preprocessing, machine learning training, deep learning, hyperparameter tuning, model serving, and more, all while integrating seamlessly with the rest of the machine learning ecosystem. Anyscale is a fully managed Ray platform that allows teams to bring products to market faster by eliminating the need to manage infrastructure and by enabling new AI capabilities. Ray and Anyscale can do recommendation systems, time series forecasting, document understanding, image processing, industrial automation, and more. Go to anyscale.com/papers and try it out today.
Our thanks to Anyscale for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
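Since the transcript points to the open-source release, here is a minimal sketch of producing such a transcript with the openai/whisper package linked in the description above; the audio filename is a placeholder:

```python
# pip install -U openai-whisper   (also requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")        # other checkpoints: "small", "medium", "large"
result = model.transcribe("episode.mp3")  # hypothetical input file; language is auto-detected
print(result["text"])                     # full transcript, like the TRANSCRIPTION column

for seg in result["segments"]:            # timed segments, like the SEGMENTS column
    print(seg["start"], seg["end"], seg["text"])

# The translation feature mentioned above (e.g. French audio -> English text):
# result = model.transcribe("episode.mp3", task="translate")
```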
[{"start": 0.0, "end": 4.8, "text": " And dear fellow scholars, this is two minute papers with Dr. Karojol Ney Fahir."}, {"start": 4.8, "end": 11.36, "text": " OpenAI's new Whisper AI is able to listen to what we say and transcribe it."}, {"start": 11.36, "end": 16.4, "text": " Your voice goes in and this text comes out like this."}, {"start": 16.4, "end": 20.88, "text": " This is incredible and it is going to change everything."}, {"start": 20.88, "end": 26.64, "text": " As you see, when running through these few sentences, it works with flying colors."}, {"start": 26.64, "end": 32.480000000000004, "text": " Well, stay tuned because you will see if we were able to break it later this video."}, {"start": 32.480000000000004, "end": 35.760000000000005, "text": " And can it be as good as a human?"}, {"start": 35.760000000000005, "end": 37.68, "text": " We will test that too."}, {"start": 37.68, "end": 42.08, "text": " But first, let's try to break it with this speed talking person."}, {"start": 42.08, "end": 45.120000000000005, "text": " This is the micro-recement presenting the most midget miniature motorcade of micro-recement."}, {"start": 45.120000000000005, "end": 47.120000000000005, "text": " Each one has dramatic details for a facial precision paint job."}, {"start": 47.120000000000005, "end": 48.480000000000004, "text": " Plus incredible micro-recement pocket place."}, {"start": 48.480000000000004, "end": 50.0, "text": " That's just a police station, fire station, restaurant,"}, {"start": 50.0, "end": 52.24, "text": " Wow, that's going to be hard."}, {"start": 52.24, "end": 54.0, "text": " So, let's see the result."}, {"start": 54.0, "end": 57.84, "text": " Wow, that is incredible."}, {"start": 57.84, "end": 61.44, "text": " And that's not all it can do so much more."}, {"start": 61.44, "end": 64.24, "text": " For instance, it does accents too."}, {"start": 64.24, "end": 65.52, "text": " Here is an example."}, {"start": 65.52, "end": 69.76, "text": " One of the most famous line marks on the board of the three holes."}, {"start": 69.76, "end": 72.8, "text": " And the method is that metal and the midgetions spot one hole."}, {"start": 72.8, "end": 74.72, "text": " So good."}, {"start": 74.72, "end": 78.4, "text": " Now, when talking about accents, I am here too."}, {"start": 78.4, "end": 82.16, "text": " And I will try my luck later in this video as well."}, {"start": 82.16, "end": 85.6, "text": " The results were interesting to say the least."}, {"start": 85.6, "end": 88.72, "text": " But wait, this knows not only English,"}, {"start": 88.72, "end": 91.52, "text": " but scientists at OpenAI said,"}, {"start": 91.52, "end": 95.6, "text": " let's throw in 96 other languages too."}, {"start": 95.6, "end": 97.75999999999999, "text": " Here is French, for example."}, {"start": 97.75999999999999, "end": 99.92, "text": " Whisper is a system of reconnaissance,"}, {"start": 99.92, "end": 101.52, "text": " automatic to the parole,"}, {"start": 101.52, "end": 102.96, "text": " entrenez sur six sans pedis."}, {"start": 102.96, "end": 107.03999999999999, "text": " And as you see, it also translates it into English."}, {"start": 107.03999999999999, "end": 108.08, "text": " So cool."}, {"start": 108.08, "end": 110.08, "text": " Now, this is all well and good."}, {"start": 110.08, "end": 114.56, "text": " But wait a second, transcription APIs already exist."}, {"start": 114.56, "end": 116.56, "text": " For instance, here on YouTube,"}, {"start": 116.56, "end": 119.75999999999999, "text": " you can also 
request those for many videos."}, {"start": 119.75999999999999, "end": 121.92, "text": " So, what is new here?"}, {"start": 121.92, "end": 124.16, "text": " Why publish this paper?"}, {"start": 124.16, "end": 125.75999999999999, "text": " Is this better?"}, {"start": 125.75999999999999, "end": 131.44, "text": " Also, what do we get for the 680,000 hours of training?"}, {"start": 131.44, "end": 133.2, "text": " Well, let's have a look."}, {"start": 133.2, "end": 134.16, "text": " This better be good."}, {"start": 135.04, "end": 136.0, "text": " Wow!"}, {"start": 136.0, "end": 137.52, "text": " What happened here?"}, {"start": 137.52, "end": 139.44, "text": " This is not a good start."}, {"start": 139.44, "end": 142.56, "text": " For the first site, it seems that we are not getting"}, {"start": 142.56, "end": 145.6, "text": " a great deal out of this AI at all."}, {"start": 145.6, "end": 150.24, "text": " Look, here between the 20 to 40 decibel signal to noise range,"}, {"start": 150.24, "end": 152.96, "text": " which means a good quality speed signal,"}, {"start": 152.96, "end": 154.64, "text": " it is the highest."}, {"start": 154.64, "end": 158.16, "text": " So, is it the best AI around for transcription?"}, {"start": 158.16, "end": 159.84, "text": " Well, not quite."}, {"start": 159.84, "end": 161.92, "text": " You see, what we are also looking at"}, {"start": 161.92, "end": 163.92, "text": " is the word error rate here,"}, {"start": 163.92, "end": 166.96, "text": " which is subject to minimization."}, {"start": 166.96, "end": 169.68, "text": " That means the smaller, the better."}, {"start": 169.68, "end": 172.48000000000002, "text": " We noted that 20 to 40 decibels"}, {"start": 172.48000000000002, "end": 175.12, "text": " is considered good quality signal."}, {"start": 175.12, "end": 179.28, "text": " Here, it has a higher error rate than previous techniques."}, {"start": 179.28, "end": 181.36, "text": " But wait, look at that."}, {"start": 181.36, "end": 185.20000000000002, "text": " When going to 5 to 10 decibels and below,"}, {"start": 185.20000000000002, "end": 189.36, "text": " these signals are so bad that we can barely tell them from noise."}, {"start": 189.36, "end": 193.04000000000002, "text": " For instance, imagine sitting in a really loud pub"}, {"start": 193.04000000000002, "end": 196.24, "text": " and here is where whisper really shines."}, {"start": 196.24, "end": 198.48000000000002, "text": " Here, it is the best."}, {"start": 198.48000000000002, "end": 200.64000000000001, "text": " And this is a good paper."}, {"start": 200.64000000000001, "end": 204.0, "text": " So, we have plenty more data on how it compares"}, {"start": 204.0, "end": 206.16, "text": " to a bunch of previous techniques."}, {"start": 206.16, "end": 209.52, "text": " Look, once again, we have the word error rate."}, {"start": 209.52, "end": 211.92000000000002, "text": " This is subject to minimization."}, {"start": 211.92000000000002, "end": 213.36, "text": " Lower is better."}, {"start": 213.36, "end": 218.16000000000003, "text": " From A to D, you see other previous automatic speech recognition"}, {"start": 218.16000000000003, "end": 221.20000000000002, "text": " systems and it beats all of them."}, {"start": 221.20000000000002, "end": 223.52, "text": " And what do we have here?"}, {"start": 223.52, "end": 228.08, "text": " Now, hold on to your papers because can that really be,"}, {"start": 228.08, "end": 230.76000000000002, "text": " is it as good as a human?"}, {"start": 230.76000000000002, 
"end": 232.52, "text": " That can't be, right?"}, {"start": 232.52, "end": 237.64000000000001, "text": " Well, the answer is yes, it can be as good as a human."}, {"start": 237.64000000000001, "end": 238.56, "text": " Kind of."}, {"start": 238.56, "end": 242.84, "text": " You see, it outperforms these professional human transcription"}, {"start": 242.84, "end": 246.24, "text": " services and is at the very least competitive"}, {"start": 246.24, "end": 247.96, "text": " with the best ones."}, {"start": 247.96, "end": 253.24, "text": " An AI that transcribes as well as a professional human does."}, {"start": 253.24, "end": 256.04, "text": " Wow, this truly feels like we are living"}, {"start": 256.04, "end": 258.0, "text": " in a science fiction movie."}, {"start": 258.0, "end": 259.92, "text": " What a time to be alive."}, {"start": 259.92, "end": 263.72, "text": " Humans, okay, it is as good as many humans."}, {"start": 263.72, "end": 267.84000000000003, "text": " That's all right, but does this pass the ultimate test"}, {"start": 267.84000000000003, "end": 269.84000000000003, "text": " for a speech AI?"}, {"start": 269.84000000000003, "end": 271.32, "text": " What would that be?"}, {"start": 271.32, "end": 274.24, "text": " Of course, that is the carotest."}, {"start": 274.24, "end": 277.72, "text": " That would be me speaking with the crazy accent."}, {"start": 277.72, "end": 280.08, "text": " Let's see, dear fellow scholars,"}, {"start": 280.08, "end": 283.28, "text": " this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 283.28, "end": 288.0, "text": " And dear fellow scanners, I don't know what's going on here."}, {"start": 288.0, "end": 291.91999999999996, "text": " It got my name perfectly, perhaps that is a sign"}, {"start": 291.91999999999996, "end": 295.32, "text": " of a super intelligence that is in the making."}, {"start": 295.32, "end": 299.91999999999996, "text": " Wow, the capitalization of two-minute papers is all right too."}, {"start": 299.91999999999996, "end": 303.64, "text": " Now, dear fellow scanners, let's try this again."}, {"start": 303.64, "end": 307.0, "text": " And now that is what I expected to happen."}, {"start": 307.0, "end": 309.96, "text": " The regular speech part is transcribed well,"}, {"start": 309.96, "end": 311.88, "text": " and it flabbed my name."}, {"start": 311.88, "end": 316.52, "text": " So, no super intelligence yet, at least not reliably."}, {"start": 316.52, "end": 319.2, "text": " So, what is all this good for?"}, {"start": 319.2, "end": 323.04, "text": " Well, imagine that you are looking at this amazing interview"}, {"start": 323.04, "end": 326.24, "text": " from Lex Friedman on super intelligence."}, {"start": 326.24, "end": 328.4, "text": " And it is one and a half hours."}, {"start": 328.4, "end": 331.16, "text": " Yes, that is very short for Lex."}, {"start": 331.16, "end": 334.52, "text": " Now, we know that they talk about immortality,"}, {"start": 334.52, "end": 336.68, "text": " but where exactly?"}, {"start": 336.68, "end": 339.56, "text": " Well, that's not a problem anymore, look."}, {"start": 339.56, "end": 343.6, "text": " Andre Carpethi ran Whisper on every single episode"}, {"start": 343.6, "end": 346.64, "text": " of Lex's podcast, and there we go."}, {"start": 346.64, "end": 350.12, "text": " This is the relevant part about immortality."}, {"start": 350.12, "end": 351.88, "text": " That is incredible."}, {"start": 351.88, "end": 355.84000000000003, "text": " Of course, you fellow scanners know that 
YouTube also helps us"}, {"start": 355.84000000000003, "end": 358.16, "text": " with its own transcription feature,"}, {"start": 358.16, "end": 361.04, "text": " or we can also look at the chapter markers,"}, {"start": 361.04, "end": 365.64, "text": " however, not all video and audio is on YouTube."}, {"start": 365.64, "end": 369.88, "text": " And here comes the kicker, Whisper works everywhere."}, {"start": 369.88, "end": 371.71999999999997, "text": " How cool is that?"}, {"start": 371.71999999999997, "end": 374.2, "text": " And here comes the best part."}, {"start": 374.2, "end": 376.03999999999996, "text": " Two amazing news."}, {"start": 376.03999999999996, "end": 380.28, "text": " One, it is open source, and two, not only that,"}, {"start": 380.28, "end": 382.8, "text": " but you can try it now too."}, {"start": 382.8, "end": 385.8, "text": " I put a link to both of these in the video description,"}, {"start": 385.8, "end": 388.59999999999997, "text": " but as always, please be patient."}, {"start": 388.59999999999997, "end": 390.47999999999996, "text": " Whenever we link to something,"}, {"start": 390.47999999999996, "end": 393.76, "text": " you fellow scanners are so excited to try it out."}, {"start": 393.76, "end": 396.56, "text": " We have crashed a bunch of webpages before."}, {"start": 396.56, "end": 399.71999999999997, "text": " This is what we call the scholarly stampede."}, {"start": 399.71999999999997, "end": 404.71999999999997, "text": " So I hear you asking, okay, but what is under the hood here?"}, {"start": 404.96, "end": 407.2, "text": " If you have a closer look at the paper,"}, {"start": 407.2, "end": 410.52, "text": " you see that it is using a simple algorithm,"}, {"start": 410.52, "end": 413.48, "text": " a transformer with a vast dataset,"}, {"start": 413.48, "end": 416.64, "text": " and it can get very, very forwarded."}, {"start": 416.64, "end": 419.32, "text": " You see here that it makes great use"}, {"start": 419.32, "end": 423.52, "text": " of that 680,000 hours of human speech,"}, {"start": 423.52, "end": 425.79999999999995, "text": " and languages other than English,"}, {"start": 425.79999999999995, "end": 429.68, "text": " and translation improves a great deal if we add more,"}, {"start": 429.68, "end": 433.35999999999996, "text": " and even the English part improves a bit too."}, {"start": 433.35999999999996, "end": 437.4, "text": " So this indicates that if we gave it even more data,"}, {"start": 437.4, "end": 439.59999999999997, "text": " it might improve it even more."}, {"start": 439.59999999999997, "end": 443.64, "text": " And don't forget, it can deal with noisy data really well."}, {"start": 443.64, "end": 447.64, "text": " So adding more might not be as big of a challenge,"}, {"start": 447.64, "end": 452.24, "text": " and it is already as good as many professional humans."}, {"start": 452.24, "end": 456.32, "text": " Wow, I can only imagine what this will be able to do"}, {"start": 456.32, "end": 458.84000000000003, "text": " just a couple more papers down the line."}, {"start": 458.84000000000003, "end": 460.76, "text": " What a time to be alive."}, {"start": 460.76, "end": 463.92, "text": " This episode is brought to you by AnySkill,"}, {"start": 463.92, "end": 465.6, "text": " the company behind Ray,"}, {"start": 465.6, "end": 468.36, "text": " the fastest growing open source framework"}, {"start": 468.36, "end": 472.2, "text": " for scalable AI and scalable Python."}, {"start": 472.2, "end": 474.64, "text": " Thousands of organizations use 
Ray,"}, {"start": 474.64, "end": 479.64, "text": " including open AI, Uber, Amazon, Spotify, Netflix,"}, {"start": 479.88, "end": 480.84000000000003, "text": " and more."}, {"start": 480.84, "end": 483.88, "text": " Ray less developers iterate faster"}, {"start": 483.88, "end": 486.08, "text": " by providing common infrastructure"}, {"start": 486.08, "end": 489.0, "text": " for scaling data in just and pre-processing,"}, {"start": 489.0, "end": 491.84, "text": " machine learning training, deep learning,"}, {"start": 491.84, "end": 495.76, "text": " hyperparameter tuning, model serving, and more."}, {"start": 495.76, "end": 498.52, "text": " All while integrating seamlessly"}, {"start": 498.52, "end": 501.47999999999996, "text": " with the rest of the machine learning ecosystem."}, {"start": 501.47999999999996, "end": 504.35999999999996, "text": " AnySkill is a fully managed Ray platform"}, {"start": 504.35999999999996, "end": 508.4, "text": " that allows teams to bring products to market faster"}, {"start": 508.4, "end": 511.79999999999995, "text": " by eliminating the need to manage infrastructure"}, {"start": 511.79999999999995, "end": 515.12, "text": " and by enabling new AI capabilities."}, {"start": 515.12, "end": 518.76, "text": " Ray and AnySkill can do recommendation systems"}, {"start": 518.76, "end": 522.24, "text": " time series forecasting, document understanding,"}, {"start": 522.24, "end": 526.16, "text": " image processing, industrial automation, and more."}, {"start": 526.16, "end": 531.16, "text": " Go to anyscale.com slash papers and try it out today."}, {"start": 531.28, "end": 535.3199999999999, "text": " Our thanks to AnySkill for helping us make better videos for you."}, {"start": 535.3199999999999, "end": 537.56, "text": " Thanks for watching and for your generous support,"}, {"start": 537.56, "end": 539.56, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eM5jn8vY2OQ
OpenAI's DALL-E 2 Has Insane Capabilities! 🤖
❤️ Check out Runway and try it for free here: https://runwayml.com/papers/ Use the code TWOMINUTE at checkout to get 10% off! 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ ☀️My free Master-level light transport course is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 📝 Our Separable Subsurface Scattering paper with Activision-Blizzard: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 📝 Our earlier paper with the caustics: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Reynante Martinez, the master's page: https://www.reynantemartinez.com/ Rendered images: LuxCore Render / Sharlybg https://luxcorerender.org/wp-content/uploads/2017/12/Salon22XS.jpg https://luxcorerender.org/wp-content/uploads/2017/12/SSDark_01b.jpg Hotel scene: Badblender - https://www.blendswap.com/blend/30669 Path tracing links on Shadertoy: https://www.shadertoy.com/view/tsBBWW https://www.shadertoy.com/view/MtfGR4 https://www.shadertoy.com/view/Ns2fzy Caustics: https://cgcookie.com/projects/luxrender-caustics https://twitter.com/djbaskin/status/1514735924826963981 Dispersion: https://wiki.luxcorerender.org/Glass_Material_IOR_and_Dispersion Chapters: 0:00 Teaser 0:48 Light Transport 1:18 Variant generation 1:48 Experiment 1 2:20 Let's try it again! 3:40 Experiment 2 5:05 Experiment 3 6:34 Experiment 4 7:40 Indirect Illumination, dispersion, course 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #dalle
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Finally, this is my Happy Episode. Well, of course, I am happy in every episode, but this is going to be my Happy Happy Episode, if you will. Why is that? Well, buckle up, because today we are going to use OpenAI's DALL-E 2, a text-to-image AI, and we will see what it is made of. Can it create beautiful light transport effects or not? We will see through four beautiful experiments. For instance, this may sound like science fiction, but today we will also see if it can recreate this scene from a true master of digital 3D art. So, what is this light thing I keep talking about? A light transport simulation means a computer program that is able to compute the path of light rays to create beautiful images like this, and this, and this. And our key problem is that initially we only get noisy images, and it can take a long time for the simulator to eliminate this noise. So, can DALL-E 2 help with that? Well, how? For instance, it can perform variant generation, where in goes one image and the AI synthesizes other, similar images. This is really cool, as it means that the AI has a good understanding of what it sees and can create different variations of it. And, wait a minute, are you thinking what I am thinking? Oh, yes, experiment number one: denoising. Let's take this noisy input image from a light transport simulator, give it to the variant generator, and see if it is able to recreate the essence of the image, but without the noise. Let's see. Well, that is super interesting. It did not denoise the image, but it did something else. It tried to understand what the noise is in this context and found it to be some sort of gold powder. How cool is that? Based on the insights gained here, let's try again with a little less noise. Oh, yes, this difficult scene would normally take even up to days to compute correctly. Do you see these light streaks here? We would need to clean those up. So, variant generation, it's your turn again. And, look at that. Wow, we get a noise-free image that captured the essence of our input. I cannot believe it. So good. Interestingly, it did not ignore the light streaks, but it thought that this is the texture of the object and synthesized new ones accordingly. This actually means that DALL-E 2 does what it is supposed to be doing: faithfully reproducing the scene and putting a different spin on it. So cool. And I think this concept could be supercharged by generating such a noisy input quickly, then denoising it with one of those handcrafted techniques for these images. These are typically not perfect, but they may be just good enough to kickstart the variant generator. I would love to see some more detailed experiments in this direction. Now, what else can this do? Well, experiment number two, my favourite: caustics. Oh, yes, these are beautiful patterns of reflected light that we see a lot of in real life, and they produce some of the most beautiful images any light transport simulation can offer. Yes, that's right. With such a simulation, we can compute these too. How cool is that? So now, let's ask DALL-E 2 to create some of these for us. And the results are truly sublime. So, regular caustics: checkmark. And what about those fun, heart-shaped caustics when we put a ring in the middle of an open book? My goodness, the AI understands that, and it really works. Loving it. However, if you look at those beautiful volumetric caustics, when running variant generation on that, it only kind of works.
There are some rays of hope here, but otherwise, I feel that the AI thinks that this is some sort of laser experiment instead. And also, don't forget about Danielle Baskin's amazing results, who created these drinks. But wait, we are light transport researchers here, so we don't look at the drink. What do we look at? Yes, of course, the caustics. Beautiful. And if we are looking at beautiful things, time for experiment number three: subsurface scattering. What is that? Oh boy, subsurface scattering is the beautiful effect of light penetrating our skin, milk, and other materials, and bouncing inside before coming out again. The lack of this effect is why the skin looks a little plasticky in older video games. However, light transport simulation researchers took care of that too. This is from our earlier paper with the Activision Blizzard game development company. This is the same phenomenon: a simulation without subsurface scattering. And this one is with simulating this effect. And in real time. Beautiful. You can find the link to this paper in the video description. So, can an AI pull this off today? That's impossible, right? Well, it seems so. If I plainly ask for subsurface scattering from DALL-E 2, I did not get any of that. However, when prompting a text-to-image AI, we have to know not only what we wish to see, but how to get it out of the algorithm. So, if we ask for translucent objects with strong backlighting, bingo, DALL-E 2 can do this too. So good. Loving it. And now, hold onto your papers, because now is the time for our final experiment. Experiment number four: reproducing the work of a true master. If the previous experiment was nearly impossible, I really don't know what this is. Here is a beautiful little virtual world from Reynante Martinez, and it really speaks for itself. Now, let's put it into the variant generator, and see what DALL-E 2 is made of. Wow! Look at that. These are incredibly good. Not as good as the master himself, but I think the first law of papers should be invoked here. Wait, what is that? The first law of papers says that research is a process. Do not look at where we are; look at where we will be two more papers down the line. And two more papers down the line, I have to say, I can imagine that we will get comparable images. I also love how it thinks that fingerprints are part of the liquid. It is a bit of a limitation, but a really beautiful one. What a time to be alive! And we haven't even talked about indirect illumination, dispersion, and many other amazing light transport effects. I really hope we will see some more experiments, perhaps from you fellow scholars, in this direction too. By the way, I have a Master-level light transport simulation course for all of you, free of charge, no strings attached, and we write a beautiful little simulator that can create this image and more. The link is in the video description. This episode has been supported by Runway, professional and magical AI video editing for everyone. I often hear you fellow scholars asking, okay, these AI techniques look great, but when do I get to use them? And the answer is: right now. Runway is an amazing video editor that can do many of the things that you see here in this series. For instance, it can automatically replace the background behind the person. It can do inpainting for videos amazingly well, and can do even text to image, image to image, you name it. No wonder it is used by editors, post-production teams, and creators at companies like CBS, Google, Vox, and many others.
Make sure to go to runwayml.com/papers, sign up, and try it for free today. And here comes the best part: use the code TWOMINUTE at checkout, and get 10% off your first month. Thanks for watching and for your generous support, and I'll see you next time.
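DALL-E 2's variant generation is only available through OpenAI's service, but the same image-to-image idea can be sketched with the open Stable Diffusion img2img pipeline from the diffusers library; the file names and prompt below are made up for illustration:

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical input: a noisy render from a light transport simulator.
init = Image.open("noisy_render.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="a glass of gold powder, studio lighting, path-traced render",
    image=init,
    strength=0.6,        # how far the output may drift from the input image
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]
out.save("variant.png")
```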
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jolna Ifehir."}, {"start": 4.5600000000000005, "end": 7.84, "text": " Finally, this is my Happy Episode."}, {"start": 7.84, "end": 16.12, "text": " Well, of course, I am happy in every episode, but this is going to be my Happy Happy Episode, if you will."}, {"start": 16.12, "end": 17.36, "text": " Why is that?"}, {"start": 17.36, "end": 27.6, "text": " Well, buckle up, because today we are going to use OpenAI's DOLI2 a text to image AI, and we will see what it is made of."}, {"start": 27.6, "end": 32.08, "text": " Can it create beautiful light transport effects or not?"}, {"start": 32.08, "end": 35.6, "text": " We will see through four beautiful experiments."}, {"start": 35.6, "end": 47.44, "text": " For instance, this may sound like science fiction, but today we will also see if it can recreate this scene from a true master of digital 3D art."}, {"start": 47.44, "end": 51.040000000000006, "text": " So, what is this light thing I keep talking about?"}, {"start": 51.04, "end": 63.36, "text": " A light transport simulation means a computer program that is able to compute the path of light rays to create beautiful images like this, and this, and this."}, {"start": 63.36, "end": 74.0, "text": " And our key problem is that initially we only get noisy images, and it can take a long time for the simulator to eliminate this noise."}, {"start": 74.0, "end": 77.03999999999999, "text": " So, can DOLI2 help with that?"}, {"start": 77.04, "end": 88.32000000000001, "text": " Well, how? For instance, it can perform a variant generation where in goes one image and the AI synthesizes other similar images."}, {"start": 88.32000000000001, "end": 97.60000000000001, "text": " This is really cool, as it means that the AI has a good understanding of what it sees and can create different variations of it."}, {"start": 97.60000000000001, "end": 102.24000000000001, "text": " And, wait a minute, are you thinking what I am thinking?"}, {"start": 102.24000000000001, "end": 105.2, "text": " Oh, yes, experiment number one."}, {"start": 105.2, "end": 120.48, "text": " De-noising. Let's give this noisy input image from a light transport simulator, give it to the variant generator, and see if it is able to recreate the essence of the image, but without the noise."}, {"start": 120.48, "end": 128.96, "text": " Let's see. Well, that is super interesting. It did not denoise the image, but it did something else."}, {"start": 128.96, "end": 137.20000000000002, "text": " It tried to understand what the noise is in this context and found it to be some sort of gold powder."}, {"start": 137.20000000000002, "end": 144.64000000000001, "text": " How cool is that? Based on the insights gained here, let's try again with a little less noise."}, {"start": 144.64000000000001, "end": 151.28, "text": " Oh, yes, this difficult scene would normally take even up to days to compute correctly."}, {"start": 151.28, "end": 156.08, "text": " Do you see these light tricks here? We would need to clean those up."}, {"start": 156.08, "end": 159.68, "text": " So, variant generation, it's your turn again."}, {"start": 159.68, "end": 167.60000000000002, "text": " And, look at that. Wow, we get a noise free image that captured the essence of our input."}, {"start": 167.60000000000002, "end": 181.44, "text": " I cannot believe it. So good. 
Interestingly, it did not ignore the light tricks, but it thought that this is the texture of the object and synthesize the new ones accordingly."}, {"start": 181.44, "end": 191.12, "text": " This actually means that Dolly too does what it is supposed to be doing, faithfully reproducing the scene and putting a different spin on it."}, {"start": 191.12, "end": 200.64, "text": " So cool. And I think this concept could be supercharged by generating such a noisy input quickly,"}, {"start": 200.64, "end": 205.6, "text": " then denoising it with one of those handcrafted techniques for these images."}, {"start": 205.6, "end": 212.88, "text": " These are typically not perfect, but they may be just good enough to kickstart the variant generator."}, {"start": 212.88, "end": 217.28, "text": " I would love to see some more detailed experiments in this direction."}, {"start": 217.28, "end": 224.88, "text": " Now, what else can this do? Well, experiment number two, my favourite, caustics."}, {"start": 224.88, "end": 234.79999999999998, "text": " Oh, yes, these are beautiful patterns of reflected light that we see a lot of in real life and they produce some of the most beautiful images,"}, {"start": 234.8, "end": 242.4, "text": " any light transport simulation can offer. Yes, that's right. With such a simulation, we can compute these two."}, {"start": 242.4, "end": 249.20000000000002, "text": " How cool is that? So now, let's ask Dolly too to create some of these for us."}, {"start": 249.20000000000002, "end": 255.36, "text": " And the results are truly sublime. So regular caustics checkmark."}, {"start": 255.36, "end": 262.64, "text": " And what about those fun, hard-shaped caustics when we put a ring in the middle of an open book?"}, {"start": 262.64, "end": 267.76, "text": " My goodness, the AI understands that and it really works."}, {"start": 267.76, "end": 272.96, "text": " Loving it. However, if you look at those beautiful volumetric caustics,"}, {"start": 272.96, "end": 277.84, "text": " when running variant generation on that, it only kind of works."}, {"start": 277.84, "end": 286.88, "text": " There are some rays of hope here, but otherwise, I feel that the AI thinks that this is some sort of laser experiment instead."}, {"start": 286.88, "end": 292.96, "text": " And also, don't forget about Daniel Baskin's amazing results who created these drinks."}, {"start": 292.96, "end": 298.71999999999997, "text": " But wait, we are light transport researchers here, so we don't look at the drink."}, {"start": 298.71999999999997, "end": 304.0, "text": " What do we look at? Yes, of course, the caustics. Beautiful."}, {"start": 304.0, "end": 311.36, "text": " And if we are looking at beautiful things, time for experiment number three, subsurface scattering."}, {"start": 311.36, "end": 319.28000000000003, "text": " What is that? 
Oh boy, subsurface scattering is the beautiful effect of light penetrating our skin,"}, {"start": 319.28000000000003, "end": 325.28000000000003, "text": " milk, and other materials, and bouncing inside before coming out again."}, {"start": 325.28000000000003, "end": 331.68, "text": " The lack of this effect is why the skin looks a little plasticky in older video games."}, {"start": 331.68, "end": 336.08000000000004, "text": " However, light transport simulation researchers took care of that too."}, {"start": 336.08, "end": 341.2, "text": " This is from our earlier paper with the Activision Blizzard Game Development Company."}, {"start": 341.2, "end": 346.24, "text": " This is the same phenomenon, a simulation without subsurface scattering."}, {"start": 346.24, "end": 351.68, "text": " And this one is with simulating this effect. And in real time."}, {"start": 351.68, "end": 356.15999999999997, "text": " Beautiful. You can find the link to this paper in the video description."}, {"start": 356.15999999999997, "end": 361.44, "text": " So, can an AI pull this off today? That's impossible, right?"}, {"start": 361.44, "end": 369.28, "text": " Well, it seems so. If I plainly ask for subsurface scattering from Dolly2, I did not get any of that."}, {"start": 369.28, "end": 376.0, "text": " However, when prompting a text to image AI, we have to know not only what we wish to see,"}, {"start": 376.0, "end": 384.08, "text": " but how to get it out of the algorithm. So, if we ask for translucent objects with strong backlighting,"}, {"start": 384.08, "end": 392.47999999999996, "text": " bingo, Dolly2 can do this too. So good. Loving it. And now, hold onto your papers, because now is the"}, {"start": 392.47999999999996, "end": 400.15999999999997, "text": " time for our final experiment. Experiment number four, reproducing the work of a true master."}, {"start": 400.15999999999997, "end": 405.59999999999997, "text": " If the previous experiment was nearly impossible, I really don't know what this is."}, {"start": 405.59999999999997, "end": 412.56, "text": " Here is a beautiful little virtual world from Reynante Martinez, and it really speaks for itself."}, {"start": 412.56, "end": 418.96, "text": " Now, let's put it into the variant generator, and see what Dolly2 is made of."}, {"start": 419.68, "end": 426.8, "text": " Wow! Look at that. These are incredibly good. Not as good as the master himself,"}, {"start": 426.8, "end": 433.04, "text": " but I think the first law of papers should be invoked here. Wait, what is that?"}, {"start": 433.04, "end": 438.88, "text": " The first law of papers says that research is a process. Don't not look at where we are,"}, {"start": 438.88, "end": 444.71999999999997, "text": " look at where we will be two more papers down the line. And two more papers down the line,"}, {"start": 444.71999999999997, "end": 452.71999999999997, "text": " I have to say. I can imagine that we will get comparable images. I also love how it thinks that"}, {"start": 452.71999999999997, "end": 459.76, "text": " fingerprints are part of the liquid. It is a bit of a limitation, but a really beautiful one."}, {"start": 459.76, "end": 466.08, "text": " What a time to be alive! And we haven't even talked about indirect illumination, dispersion,"}, {"start": 466.08, "end": 472.47999999999996, "text": " and many other amazing light transport effects. 
I really hope we will see some more experiments"}, {"start": 472.47999999999996, "end": 478.88, "text": " perhaps from you fellow scholars in this direction too. By the way, I have a master level light"}, {"start": 478.88, "end": 485.2, "text": " transport simulation course for all of you, free of charge, no strings attached, and we write"}, {"start": 485.2, "end": 492.08, "text": " a beautiful little simulator that can create this image and more. The link is in the video description."}, {"start": 492.08, "end": 499.52, "text": " This episode has been supported by Ranway, professional and magical AI video editing for everyone."}, {"start": 499.52, "end": 506.88, "text": " I often hear you fellow scholars asking, okay, these AI techniques look great, but when do I get"}, {"start": 506.88, "end": 513.92, "text": " to use them? And the answer is, right now, Ranway is an amazing video editor that can do many of"}, {"start": 513.92, "end": 520.0799999999999, "text": " the things that you see here in this series. For instance, it can automatically replace the"}, {"start": 520.08, "end": 528.24, "text": " background behind the person. It can do in-painting for videos amazingly well, and can do even text to"}, {"start": 528.24, "end": 534.96, "text": " image, image to image, you name it. No wonder it is used by editors, post-production teams,"}, {"start": 534.96, "end": 543.76, "text": " and creators at companies like CBS, Google, Vox, and many other. Make sure to go to RanwayML.com,"}, {"start": 543.76, "end": 551.04, "text": " slash papers, sign up, and try it for free today. And here comes the best part, use the code"}, {"start": 551.04, "end": 557.36, "text": " two minute at checkout, and get 10% off your first month. Thanks for watching and for your generous"}, {"start": 557.36, "end": 587.2, "text": " support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hPR5kU91Ef4
Google’s New AI Learns Table Tennis! 🏓
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops" is available here: https://sites.google.com/view/is2r https://twitter.com/lgraesser3/status/1547942995139301376 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/table-tennis-passion-sport-1208385/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Do you see this new table tennis robot? It barely played any games in the real world, yet it can return the ball more than a hundred times without failing. Wow! So, how is this even possible? Well, this is a sim-to-real paper, which means that first the robot starts learning in a simulation. OpenAI did this earlier by teaching a robot hand in a simulated environment to manipulate this Rubik's Cube, and Tesla also trains its cars in a computer simulation. Why? Well, in the real world some things are possible, but in a simulated world anything is possible. Yes, even this. And the self-driving car can safely train in this environment, and when it is ready, it can be safely brought into the real world. How cool is that? Now how do we apply this concept to table tennis? Hmm, well, in this case the robot would not move, but it would play a computer game in its head, if you will. But not so fast. That is impossible. What are we simulating exactly? The machine doesn't even know how humans play. There is no one to play against. Now check this out. To solve this, first the robot asks for some human data. Look, it won't do anything, it just observes how we play. And it only requires short sequences. Then, it builds a model of how we play and embeds us into a computer simulation, where it plays against us over and over again without any real physical movement. It is training the brain, if you will. And now comes the key step. This knowledge from the computer simulation is now transferred to the real robot. And now, let's see if this computer game knowledge really translates to the real world. So can it return this ball? It can? Well, kind of. One more time. Okay, better. And now, well, it missed again. I see some signs of learning here, but this is not great. So is that it? So much for learning in a simulation and bringing this knowledge into the real world. Right? Well, do not despair, because there is still hope. What can we do? Well, now it knows how it failed and how it interacted with the human. Yes, that is great. Why? Because it can feed this new knowledge back into the simulation. The simulation can now be fired up once again. And with all this knowledge, it can repeat until the simulation starts looking very similar to the real world. That is where the real fun begins. Why? Well, check this out. This is the previous version of this technique, and as you see, this does not play well. So how about the new method? Now hold on to your papers and marvel at this rally. Eighty-two hits and not one mistake. This is so much better. Wow, this sim-to-real concept really works. And wait a minute, we are experienced fellow scholars here, so we have a question. If the training set was built from data when it played against this human being, does it really know how to play against only this person? Or did it obtain more general knowledge, and can it play with others? Well, let's have a look. The robot hasn't played this person before. And let's see how the previous technique fares. Well, that was not a long rally. And neither is this one. And now let's see the new method. Oh my, this is so much better. It learns much more general information from the very limited human data it was given. So it can play really well with all kinds of players of different skill levels. Here you see a selection of them. And all this from learning in a computer game with just a tiny bit of human behavioral data.
And it can even perform a rally of over a hundred hits. What a time to be alive. So does this get your mind going? What would you use this sim-to-real concept for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Sign up and launch an instance, and hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour, versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support. I'll see you next time.
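As a toy illustration of the iterative sim-to-real loop described above (train in simulation, deploy on the real system, feed the real observations back into the simulator), here is a deliberately simplified, runnable sketch; the "dynamics" are invented for illustration and are not the paper's actual setup:

```python
import random

REAL_SPEED = 3.7             # the real ball speed, unknown to the agent
sim_speed = 1.0              # the simulator's initial (wrong) model of it

def play_real(policy: float):
    """One real rally: return the observed ball speed and whether we hit it."""
    observed = REAL_SPEED + random.gauss(0, 0.1)  # noisy real-world measurement
    return observed, abs(policy - REAL_SPEED) < 0.5

for round_ in range(5):
    policy = sim_speed                        # 1) "train" against the simulator
    observed, hit = play_real(policy)         # 2) deploy in the real world
    sim_speed += 0.8 * (observed - sim_speed) # 3) refit the simulator from real data
    print(f"round {round_}: sim={sim_speed:.2f} hit={hit}")
```

After a few rounds the simulator's estimate converges toward the real value and the policy starts returning the ball, mirroring how each real deployment pulls the simulation closer to reality.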
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajol Naifahir."}, {"start": 4.6000000000000005, "end": 7.42, "text": " Do you see this new table tennis robot?"}, {"start": 7.42, "end": 14.200000000000001, "text": " It barely played any games in the real world, yet it can return the ball more than a hundred"}, {"start": 14.200000000000001, "end": 16.0, "text": " times without failing."}, {"start": 16.0, "end": 17.0, "text": " Wow!"}, {"start": 17.0, "end": 19.6, "text": " So, how is this even possible?"}, {"start": 19.6, "end": 26.28, "text": " Well, this is a seem-torial paper, which means that first the robot starts learning in"}, {"start": 26.28, "end": 27.6, "text": " a simulation."}, {"start": 27.6, "end": 33.88, "text": " Open AI did this earlier by teaching the robot hand in a simulated environment to manipulate"}, {"start": 33.88, "end": 39.92, "text": " this ruby cube and Tesla also trains its cars in a computer simulation."}, {"start": 39.92, "end": 40.92, "text": " Why?"}, {"start": 40.92, "end": 48.400000000000006, "text": " Well, in the real world some things are possible, but in a simulated world anything is possible."}, {"start": 48.400000000000006, "end": 50.88, "text": " Yes, even this."}, {"start": 50.88, "end": 56.400000000000006, "text": " And the self-driving car can safely train in this environment, and when it is ready, it"}, {"start": 56.4, "end": 59.68, "text": " can be safely brought into the real world."}, {"start": 59.68, "end": 61.519999999999996, "text": " How cool is that?"}, {"start": 61.519999999999996, "end": 66.2, "text": " Now how do we apply this concept to table tennis?"}, {"start": 66.2, "end": 72.94, "text": " Hmm, well, in this case the robot would not move, but it would play a computer game in"}, {"start": 72.94, "end": 74.96, "text": " its head if you will."}, {"start": 74.96, "end": 77.16, "text": " But not so fast."}, {"start": 77.16, "end": 78.96, "text": " That is impossible."}, {"start": 78.96, "end": 81.08, "text": " What are we simulating exactly?"}, {"start": 81.08, "end": 84.6, "text": " The machine doesn't even know how humans play."}, {"start": 84.6, "end": 87.08, "text": " There is no one to play against."}, {"start": 87.08, "end": 88.91999999999999, "text": " Now check this out."}, {"start": 88.91999999999999, "end": 93.52, "text": " To solve this, first the robot asks for some human data."}, {"start": 93.52, "end": 99.16, "text": " Look, it won't do anything, it just observes how we play."}, {"start": 99.16, "end": 102.0, "text": " And it only requires the short sequences."}, {"start": 102.0, "end": 108.91999999999999, "text": " Then, it builds a model of how we play and embeds us into a computer simulation where"}, {"start": 108.92, "end": 115.0, "text": " it plays against us over and over again without any real physical movement."}, {"start": 115.0, "end": 117.72, "text": " It is training the brain, if you will."}, {"start": 117.72, "end": 119.68, "text": " And now comes the key step."}, {"start": 119.68, "end": 124.96000000000001, "text": " This knowledge from the computer simulation is now transferred to the real robot."}, {"start": 124.96000000000001, "end": 130.76, "text": " And now, let's see if this computer game knowledge really translates to the real world."}, {"start": 130.76, "end": 133.28, "text": " So can it return this ball?"}, {"start": 133.28, "end": 134.28, "text": " It can?"}, {"start": 134.28, "end": 136.4, "text": " Well, kind of."}, {"start": 136.4, 
"end": 137.4, "text": " One more time."}, {"start": 137.4, "end": 139.68, "text": " Okay, better."}, {"start": 139.68, "end": 142.64000000000001, "text": " And now, well, it missed again."}, {"start": 142.64000000000001, "end": 147.4, "text": " I see some signs of learning here, but this is not great."}, {"start": 147.4, "end": 149.16, "text": " So is that it?"}, {"start": 149.16, "end": 154.4, "text": " So much for learning in a simulation and bringing this knowledge into the real world."}, {"start": 154.4, "end": 155.4, "text": " Right?"}, {"start": 155.4, "end": 159.56, "text": " Well, do not despair because there is still hope."}, {"start": 159.56, "end": 160.56, "text": " What can we do?"}, {"start": 160.56, "end": 166.44, "text": " Well, now it knows how it failed and how it interacted with the human."}, {"start": 166.44, "end": 168.68, "text": " Yes, that is great."}, {"start": 168.68, "end": 170.07999999999998, "text": " Why?"}, {"start": 170.07999999999998, "end": 174.76, "text": " Because it can feed this new knowledge back into the simulation."}, {"start": 174.76, "end": 178.6, "text": " The simulation can now be fired up once again."}, {"start": 178.6, "end": 185.04, "text": " And with all this knowledge, it can repeat until the simulation starts looking very similar"}, {"start": 185.04, "end": 186.76, "text": " to the real world."}, {"start": 186.76, "end": 189.16, "text": " That is where the real fun begins."}, {"start": 189.16, "end": 190.16, "text": " Why?"}, {"start": 190.16, "end": 192.04, "text": " Well, check this out."}, {"start": 192.04, "end": 198.04, "text": " This is the previous version of this technique and as you see, this does not play well."}, {"start": 198.04, "end": 200.44, "text": " So how about the new method?"}, {"start": 200.44, "end": 204.95999999999998, "text": " Now hold on to your papers and marvel at this rally."}, {"start": 204.95999999999998, "end": 209.32, "text": " Ety-2 hits and not one mistake."}, {"start": 209.32, "end": 211.12, "text": " This is so much better."}, {"start": 211.12, "end": 215.64, "text": " Wow, this seem to real concept really works."}, {"start": 215.64, "end": 222.0, "text": " And wait a minute, we are experienced fellow scholars here, so we have a question."}, {"start": 222.0, "end": 227.44, "text": " If the training set was built from data when it played against this human being, does"}, {"start": 227.44, "end": 231.68, "text": " it really know how to play against only this person?"}, {"start": 231.68, "end": 236.56, "text": " Or did it obtain more general knowledge and can it play with others?"}, {"start": 236.56, "end": 238.56, "text": " Well, let's have a look."}, {"start": 238.56, "end": 241.76, "text": " The robot hasn't played this person before."}, {"start": 241.76, "end": 245.2, "text": " And let's see how the previous technique fares."}, {"start": 245.2, "end": 248.84, "text": " Well, that was not a long rally."}, {"start": 248.84, "end": 250.72, "text": " And neither is this one."}, {"start": 250.72, "end": 253.52, "text": " And now let's see the new method."}, {"start": 253.52, "end": 257.4, "text": " Oh my, this is so much better."}, {"start": 257.4, "end": 263.44, "text": " It learns much more general information from the very limited human data it was given."}, {"start": 263.44, "end": 269.16, "text": " So it can play really well with all kinds of players of different skill levels."}, {"start": 269.16, "end": 271.32, "text": " Here you see a selection of them."}, {"start": 271.32, "end": 277.04, "text": " 
And all this from learning in a computer game with just a tiny bit of human behavioral"}, {"start": 277.04, "end": 278.2, "text": " data."}, {"start": 278.2, "end": 283.2, "text": " And it can even perform a rally of over a hundred hits."}, {"start": 283.2, "end": 284.92, "text": " What a time to be alive."}, {"start": 284.92, "end": 287.12, "text": " So does this get your mind going?"}, {"start": 287.12, "end": 290.28, "text": " What would you use this seem to real concept for?"}, {"start": 290.28, "end": 292.08, "text": " Let me know in the comments below."}, {"start": 292.08, "end": 299.03999999999996, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 299.03999999999996, "end": 302.48, "text": " in the world for GPU cloud compute."}, {"start": 302.48, "end": 305.44, "text": " No commitments or negotiation required."}, {"start": 305.44, "end": 312.76, "text": " Sign up and launch an instance and hold on to your papers because with Lambda GPU cloud,"}, {"start": 312.76, "end": 323.04, "text": " you can get on demand a 100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 323.04, "end": 325.64, "text": " That's 73% savings."}, {"start": 325.64, "end": 329.12, "text": " Did I mention they also offer persistent storage?"}, {"start": 329.12, "end": 337.28000000000003, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 337.28000000000003, "end": 339.64, "text": " workstations or servers."}, {"start": 339.64, "end": 346.04, "text": " Make sure to go to lambda-labs.com slash papers to sign up for one of their amazing GPU"}, {"start": 346.04, "end": 347.32, "text": " instances today."}, {"start": 347.32, "end": 349.72, "text": " Thanks for watching and for your generous support."}, {"start": 349.72, "end": 359.72, "text": " I'll see you next time."}]
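The segments above describe an iterative sim-to-real loop: observe short clips of human play, fit a model of the human, train the robot's policy against that model in simulation, deploy on the real robot, then feed the real-world failures back to refine the simulation. A minimal sketch of that loop follows; every name in it (collect_human_rallies, OpponentModel, Simulator, train_policy, deploy_on_robot) is a hypothetical stand-in for illustration, not an API from the paper.

```python
# Hypothetical sketch of the sim-to-real training loop described above.
# All helpers here are assumed names for illustration, not the paper's code.

def sim_to_real_training(num_rounds: int = 5):
    # 1) Observe short sequences of human play; the robot does not move yet.
    human_data = collect_human_rallies(minutes=30)

    # 2) Fit a behavioral model of the human and embed it in a simulator,
    #    so the robot can play "in its head" without physical movement.
    opponent = OpponentModel.fit(human_data)
    sim = Simulator(opponent=opponent)

    policy = None
    for _ in range(num_rounds):
        # 3) Train the control policy entirely in simulation.
        policy = train_policy(sim, init=policy)

        # 4) Transfer the learned policy to the real robot and record
        #    how it interacts with (and fails against) real humans.
        real_rallies = deploy_on_robot(policy)

        # 5) Feed that data back so the simulation drifts closer to the
        #    real world before the next round of training.
        opponent.update(real_rallies)
        sim.calibrate(real_rallies)

    return policy
```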
Two Minute Papers
https://www.youtube.com/watch?v=bT8e1EV5-ic
Stable Diffusion Is Getting Outrageously Good! 🤯
❤️ Check out Anyscale and try it for free here: https://www.anyscale.com/papers 📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://ommer-lab.com/research/latent-diffusion-models/ https://github.com/mallorbc/stable-diffusion-klms-gui You can also try Stable diffusion for free here: https://huggingface.co/spaces/stabilityai/stable-diffusion Credits: 1. Prompt-image repository https://lexica.art + Variants from photos https://twitter.com/sharifshameem/status/157177206133663334 2. Infinite zoom https://twitter.com/matthen2/status/1564608723636654093 + how to do it https://twitter.com/matthen2/status/1564608773485895692 3. Lego to reality https://twitter.com/matthen2/status/156609779409551360 5. 2D to 3D https://twitter.com/thibaudz/status/1566136808504786949 6. Cat Knight https://hostux.social/@valere/108939000926741542 7. Drawing to image https://www.reddit.com/r/MachineLearning/comments/x5dwm5/p_apple_pencil_with_the_power_of_local_stable/ 8. Image to image https://sciprogramming.com/community/index.php?topic=2081.0 9. Variant generation easier https://twitter.com/Buntworthy/status/1566744186153484288 + https://github.com/justinpinkney/stable-diffusion + https://github.com/gradio-app/gradio 10. Texture synthesis https://twitter.com/metasemantic/status/1568997322805100547 + https://twitter.com/dekapoppo/status/1571913696523489280 11. Newer version of stable_diffusion_videos - https://twitter.com/_nateraw/status/1569315074824871936 Interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI Thumbnail source images: Anjney Midha 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Intro 0:30 Stable Diffusion 1:20 AI art repository 2:14 Infinite zoom 2:24 Lego to reality 2:52 Creating 3D images 3:16 Cat Knight 3:50 The rest of the owl 4:03 Image to image! 4:43 Variant generation 5:02 Texture synthesis 5:55 Stable Diffusion video generator 6:20 Free and open for all of us! Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, you will see the power of human ingenuity supercharged by an AI. As we are living the advent of AI-based image generation, we now have several tools that are so easy to use, we just enter a piece of text and out comes a beautiful image. Now, you're asking, okay Károly, they are easy to use, but for whom? Well, good news. We now have a new solution called Stable Diffusion, where the model weights and the full source code are available for free for everyone. Finally, we talked a bit about this before, and once again, I cannot overstate how amazing this is. I am completely spellbound by how the community has worked together to bring this project to life, and you fellow scholars just kept on improving it, and it harnesses the greatest asset of humanity, and that is the power of the community working together, and I cannot believe how much Stable Diffusion has improved in just the last few weeks. Don't believe it? Well, let's have a look together at 10 amazing examples of how the community is already using it. One, today, image generation works so inexpensively that we don't even need to necessarily generate our own. We can even look at this amazing repository where we enter the prompt and can find thousands and thousands of generated images for that concept. Yes, even for Napoleon cats, we have thousands of hits. So good. Now, additionally, we can also add a twist to it by photographing something in real life, obtaining a text prompt for it, and bam! It finds similar images that were synthesized by Stable Diffusion. This is variant generation of sorts, piggybacking on images that have been synthesized already, so we can choose from a large gallery of these works. Two, by using a little trickery and the image inpainting feature, we can now create these amazing infinite zoom images. So good! Three, whenever we build something really cool with Legos, we can now ask Stable Diffusion to reimagine what it would look like if it were a real object. The results are by no means perfect, but based on what it comes up with, it really seems to understand what is being built here and what its real counterpart would look like. I love it! Four, after generating a flat, 2D image with Stable Diffusion, with other techniques, we can obtain a depth map which describes how far different objects are from the camera. Now that is something that we've seen before. However, now in Adobe's After Effects, look, we can create this little video with a parallax effect. Absolutely incredible! Five, have a look at this cat knight. I love the eyes and all of these gorgeous details on the armor. This image really tells the story, but what is even better is that not only is the prompt available, but Stable Diffusion is also a free and open source model, so we can pop the hood, reuse the same parameters as the author, and get a reproduction of the very same image. And it is also much easier to edit it this way if we wish to see anything changed. Six, if we are not the most skilled artist, we can draw a really rudimentary owl, hand it to the AI, and it will draw the rest of this fine owl. Seven, and if you think the drawing to image example was amazing, now hold onto your papers for this one. This fellow scholar had a crazy idea. Look, these screenshots of old Sierra video games were given to the algorithm, and there is no way, right? Well, let's see. Oh wow! Look at that! The results are absolutely incredible. 
I love how closely it follows the framing and the mood of the original photos. I have to be honest, some of these feel good to go as they are. What a time to be alive! Eight, with these new web apps, variant generation is now way easier and faster than before. It is now as simple as dropping in an image. By the way, a link to each of these is available in the video description, and their source code is available as well. Nine, in an earlier episode, we had a look at how artists are already using DALL-E 2 in the industry to take a photo of something and, miraculously, extend it almost infinitely. This is called texture synthesis, and there are no seams anywhere to be seen. And now, dear fellow scholars, seamless texture generation is possible in Stable Diffusion too. Not too many years ago, we needed not only a proper handcrafted computer graphics algorithm to even have a fighting chance of creating something like this, but implementing a bunch of these techniques was also required because different algorithms did well on different examples. And now, just one tool can do it all. How cool is that? And 10, Stable Diffusion itself is also being improved. Oh yes, this new version adds super-resolution to the mix, which enables us to synthesize even more details and even higher resolution images with it. This thing is improving so quickly, we can barely keep up with it. So, which one was your favorite? Let me know in the comments below. And once again, this is my favorite type of work, which is free and open for everyone. So, I would like you fellow scholars to also take out your digital wrenches and create something new and amazing. Let the experiments begin. This episode is brought to you by Anyscale, the company behind Ray, the fastest growing open source framework for scalable AI and scalable Python. Thousands of organizations use Ray, including OpenAI, Uber, Amazon, Spotify, Netflix, and more. Ray lets developers iterate faster by providing common infrastructure for scaling data ingest and pre-processing, machine learning training, deep learning, hyperparameter tuning, model serving, and more, all while integrating seamlessly with the rest of the machine learning ecosystem. Anyscale is a fully managed Ray platform that allows teams to bring products to market faster by eliminating the need to manage infrastructure and by enabling new AI capabilities. Ray and Anyscale can do recommendation systems, time series forecasting, document understanding, image processing, industrial automation, and more. Go to anyscale.com slash papers and try it out today. Our thanks to Anyscale for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
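Example five above relies on Stable Diffusion being open: with the author's prompt and generation parameters, anyone can reproduce the same image. A minimal sketch with the Hugging Face diffusers library follows; the checkpoint name, prompt and seed are illustrative examples, not the actual parameters behind the cat knight image.

```python
# Reproducing a shared Stable Diffusion image by reusing the author's
# prompt, seed, guidance scale and step count (all example values here).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

# A fixed seed makes the sampling deterministic on the same setup.
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    "a cat knight in ornate armor, highly detailed digital art",
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("cat_knight.png")
```

Note that exact reproduction also assumes the same scheduler, library version and hardware; small numerical differences can change the output slightly.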
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zone Fahir."}, {"start": 4.5600000000000005, "end": 11.040000000000001, "text": " Today, you will see the power of human ingenuity supercharged by an AI."}, {"start": 11.040000000000001, "end": 14.88, "text": " As we are living the advent of AI-based image generation,"}, {"start": 14.88, "end": 19.04, "text": " we now have several tools that are so easy to use,"}, {"start": 19.04, "end": 24.16, "text": " we just enter a piece of text and out comes a beautiful image."}, {"start": 24.16, "end": 29.76, "text": " Now, you're asking, okay Karo, they are easy to use, but for whom?"}, {"start": 29.76, "end": 34.96, "text": " Well, good news. We now have a new solution called stable diffusion,"}, {"start": 34.96, "end": 41.04, "text": " where the model weights and the full source code are available for free for everyone."}, {"start": 41.04, "end": 48.24, "text": " Finally, we talked a bit about this before, and once again, I cannot overstate how amazing this is."}, {"start": 48.24, "end": 55.040000000000006, "text": " I am completely spellbound by how the community has worked together to bring this project to life,"}, {"start": 55.04, "end": 62.64, "text": " and you fellow scholars just kept on improving it, and it harnesses the greatest asset of humanity,"}, {"start": 62.64, "end": 66.88, "text": " and that is the power of the community working together,"}, {"start": 66.88, "end": 73.84, "text": " and I cannot believe how much stable diffusion has improved in just the last few weeks."}, {"start": 73.84, "end": 76.88, "text": " Don't believe it? Well, let's have a look together,"}, {"start": 76.88, "end": 82.16, "text": " and 10 amazing examples of how the community is already using it."}, {"start": 82.16, "end": 91.36, "text": " One, today, the image generation works so inexpensively that we don't even need to necessarily generate our own."}, {"start": 91.36, "end": 102.08, "text": " We can even look at this amazing repository where we enter the prompt and can find thousands and thousands of generated images for death concept."}, {"start": 102.08, "end": 108.0, "text": " Yes, even for Napoleon cats, we have thousands of hits. 
So good."}, {"start": 108.0, "end": 114.4, "text": " Now, additionally, we can also add a twist to it by photographing something in real life,"}, {"start": 114.4, "end": 117.76, "text": " obtaining a text prompt for it, and bam!"}, {"start": 117.76, "end": 122.72, "text": " It finds similar images that were synthesized by stable diffusion."}, {"start": 122.72, "end": 130.08, "text": " This is very a generation of sorts, but piggybacking on images that have been synthesized already,"}, {"start": 130.08, "end": 134.08, "text": " therefore we can choose from a large gallery of these works."}, {"start": 134.08, "end": 138.88000000000002, "text": " Two, by using a little trickery and the image-impaining feature,"}, {"start": 138.88000000000002, "end": 143.44, "text": " we can now create these amazing infinite zoom images."}, {"start": 143.44, "end": 144.48000000000002, "text": " So good!"}, {"start": 144.48000000000002, "end": 148.48000000000002, "text": " Three, whenever we build something really cool with Legos,"}, {"start": 148.48000000000002, "end": 155.76000000000002, "text": " we can now ask stable diffusion to reimagine what it would look like if it were a real object."}, {"start": 155.76000000000002, "end": 158.64000000000001, "text": " The results are by no means perfect,"}, {"start": 158.64, "end": 164.79999999999998, "text": " but based on what it comes up with, it really seems to understand what is being built here"}, {"start": 164.79999999999998, "end": 167.67999999999998, "text": " and what its real counterpart would look like."}, {"start": 167.67999999999998, "end": 168.64, "text": " I love it!"}, {"start": 169.44, "end": 171.76, "text": " Four, after generating a flat,"}, {"start": 171.76, "end": 175.35999999999999, "text": " 2D image with stable diffusion, with other techniques,"}, {"start": 175.35999999999999, "end": 182.39999999999998, "text": " we can obtain a depth map which describes how four different objects are from the camera."}, {"start": 182.39999999999998, "end": 185.44, "text": " Now that is something that we've seen before."}, {"start": 185.44, "end": 189.12, "text": " However, now in Adobe's After Effects,"}, {"start": 189.12, "end": 193.76, "text": " look, we can create this little video with a parallax effect."}, {"start": 193.76, "end": 196.0, "text": " Absolutely incredible!"}, {"start": 196.0, "end": 198.88, "text": " Five, have a look at this catnite."}, {"start": 198.88, "end": 203.92, "text": " I love the eyes and all of these gorgeous details on the armor."}, {"start": 203.92, "end": 206.56, "text": " This image really tells the story,"}, {"start": 206.56, "end": 210.88, "text": " but what is even better is that not only the prompt is available,"}, {"start": 210.88, "end": 215.76, "text": " but also stable diffusion is a free and open source model,"}, {"start": 215.76, "end": 220.16, "text": " so we can pop the hood, reuse the same parameters as the author,"}, {"start": 220.16, "end": 224.0, "text": " and get a reproduction of the very same image."}, {"start": 224.0, "end": 230.07999999999998, "text": " And it is also much easier to edit it this way if we wish to see anything changed."}, {"start": 230.07999999999998, "end": 233.28, "text": " Six, if we are not the most skilled artist,"}, {"start": 233.28, "end": 236.4, "text": " we can draw a really rudimentary owl,"}, {"start": 236.4, "end": 241.12, "text": " handed to the AI, and it will draw the rest of this fine owl."}, {"start": 241.84, "end": 246.48000000000002, "text": " Seven, and if you think 
the drawing to image example was amazing,"}, {"start": 246.48000000000002, "end": 249.04000000000002, "text": " now hold onto your papers for this one."}, {"start": 249.04000000000002, "end": 252.64000000000001, "text": " This fellow scholar had a crazy idea."}, {"start": 252.64000000000001, "end": 258.16, "text": " Look, these screenshots of old Sierra video games were given to the algorithm,"}, {"start": 258.16, "end": 260.4, "text": " and there is no way, right?"}, {"start": 261.12, "end": 262.32, "text": " Well, let's see."}, {"start": 263.04, "end": 264.32, "text": " Oh wow!"}, {"start": 264.32, "end": 265.68, "text": " Look at that!"}, {"start": 265.68, "end": 268.8, "text": " The results are absolutely incredible."}, {"start": 268.8, "end": 275.12, "text": " I love how closely it follows the framing and the mood of the original photos."}, {"start": 275.12, "end": 279.84000000000003, "text": " I have to be honest, some of these feel good to go as they are."}, {"start": 279.84000000000003, "end": 281.84000000000003, "text": " What a time to be alive!"}, {"start": 283.04, "end": 285.28000000000003, "text": " Eight, with these new web apps,"}, {"start": 285.28000000000003, "end": 290.4, "text": " variant generation is now way easier and faster than before."}, {"start": 290.4, "end": 293.92, "text": " It is now as simple as dropping in an image."}, {"start": 293.92, "end": 298.40000000000003, "text": " By the way, a link to each of these is available in the video description,"}, {"start": 298.40000000000003, "end": 300.96000000000004, "text": " and their source code is available as well."}, {"start": 301.68, "end": 306.96000000000004, "text": " Nine, in an earlier episode, we had a look at how artists are already using"}, {"start": 306.96000000000004, "end": 311.20000000000005, "text": " Dolly II in the industry to make a photo of something"}, {"start": 311.20000000000005, "end": 315.52000000000004, "text": " and miraculously, extended almost infinitely."}, {"start": 315.52000000000004, "end": 321.44, "text": " This is called texture synthesis, and no seems anywhere to be seen."}, {"start": 321.44, "end": 325.6, "text": " And now, deep fellow scholars, seamless texture generation,"}, {"start": 325.6, "end": 328.96, "text": " is now possible, in stable diffusion too."}, {"start": 328.96, "end": 333.04, "text": " Not too many years ago, we needed not only a proper"}, {"start": 333.04, "end": 337.76, "text": " handcrafted computer graphics algorithm to even have a fighting chance"}, {"start": 337.76, "end": 339.52, "text": " to create something like this,"}, {"start": 339.52, "end": 343.68, "text": " but implementing a bunch of these techniques was also required"}, {"start": 343.68, "end": 348.0, "text": " because different algorithms did well on different examples."}, {"start": 348.0, "end": 351.52, "text": " And now, just one tool that can do it all."}, {"start": 351.52, "end": 352.88, "text": " How cool is that?"}, {"start": 354.0, "end": 358.72, "text": " And 10, stable diffusion itself is also being improved."}, {"start": 359.36, "end": 363.84, "text": " Oh yes, this new version adds super-resolution to the mix"}, {"start": 363.84, "end": 367.44, "text": " which enables us to synthesize even more details"}, {"start": 367.44, "end": 370.56, "text": " and even higher resolution images with it."}, {"start": 370.56, "end": 375.12, "text": " This thing is improving so quickly, we can barely keep up with it."}, {"start": 375.12, "end": 378.08, "text": " So, which one was your favorite?"}, 
{"start": 378.08, "end": 379.84000000000003, "text": " Let me know in the comments below."}, {"start": 379.84000000000003, "end": 383.52, "text": " And once again, this is my favorite type of work,"}, {"start": 383.52, "end": 386.48, "text": " which is free and open for everyone."}, {"start": 386.48, "end": 390.56, "text": " So, I would like you fellow scholars to also take out your digital"}, {"start": 390.56, "end": 394.48, "text": " wrenches and create something new and amazing."}, {"start": 394.48, "end": 396.56, "text": " Let the experiments begin."}, {"start": 396.56, "end": 399.68, "text": " This episode is brought to you by AnySkill."}, {"start": 399.68, "end": 404.08, "text": " The company behind Ray, the fastest growing open source framework"}, {"start": 404.08, "end": 408.08, "text": " for scalable AI and scalable Python."}, {"start": 408.08, "end": 410.47999999999996, "text": " Thousands of organizations use Ray,"}, {"start": 410.47999999999996, "end": 416.71999999999997, "text": " including open AI, Uber, Amazon, Spotify, Netflix, and more."}, {"start": 416.71999999999997, "end": 421.91999999999996, "text": " Ray, less developers, iterate faster by providing common infrastructure"}, {"start": 421.91999999999996, "end": 424.88, "text": " for scaling data in just and pre-processing,"}, {"start": 424.88, "end": 427.68, "text": " machine learning training, deep learning,"}, {"start": 427.68, "end": 431.59999999999997, "text": " hyperparameter tuning, model serving, and more."}, {"start": 431.6, "end": 437.28000000000003, "text": " All while integrating seamlessly with the rest of the machine learning ecosystem."}, {"start": 437.28000000000003, "end": 441.76000000000005, "text": " AnySkill is a fully managed Ray platform that allows teams"}, {"start": 441.76000000000005, "end": 445.76000000000005, "text": " to bring products to market faster by eliminating the need"}, {"start": 445.76000000000005, "end": 450.88, "text": " to manage infrastructure and by enabling new AI capabilities."}, {"start": 450.88, "end": 454.56, "text": " Ray and AnySkill can do recommendation systems,"}, {"start": 454.56, "end": 458.0, "text": " time series forecasting, document understanding,"}, {"start": 458.0, "end": 462.0, "text": " image processing, industrial automation, and more."}, {"start": 462.0, "end": 467.04, "text": " Go to AnySkill.com slash peepers and try it out today."}, {"start": 467.04, "end": 471.12, "text": " Our thanks to AnySkill for helping us make better videos for you."}, {"start": 471.12, "end": 473.44, "text": " Thanks for watching and for your generous support,"}, {"start": 473.44, "end": 488.88, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=k54cpsAbMn4
NVIDIA’s Amazing AI Clones Your Voice! 🤐
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "One TTS Alignment To Rule Them All" is available here: https://arxiv.org/abs/2108.10447 Early access: https://developer.nvidia.com/riva/studio-early-access 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to clone real human voices using an AI. How? Well, in an earlier NVIDIA keynote, we had a look at Toy Jensen, an AI-powered virtual assistant of NVIDIA CEO Jensen Huang. It could do this. Synthetic biology is about designing biological systems at multiple levels from individual molecules. Look at the face of that problem. I love how it also uses hand gestures that go really well with the explanation. These virtual AI assistants are going to appear everywhere to help you with your daily tasks. For instance, in your car, the promise is that they will be able to recognize you as the owner of the car, recommend shows nearby, and even drive you there. These Omniverse avatars may also help us order our favorite burgers too. And we won't even need to push buttons on a touchscreen. We just need to say what we wish to eat, and the assistant will answer and take our order, perhaps later even in a familiar person's voice. How cool is that? And today, I am going to ask you to imagine a future where we can all have our own Toy Jensens, or our own virtual assistants with our own voice. All of us. That sounds really cool. So is that in the far future? No, not at all. Today, I have the amazing opportunity to show you a bit more about the AI that makes this voice synthesis happen. And yes, you will hear things that are only available here at Two Minute Papers. So what is all this about? This work is an AI-based technique that takes samples of our voice and can then clone it. Let's give it a try. This is Jamil from NVIDIA, who was kind enough to record these voice snippets. Listen. I think they have to change that. Further details are expected later. Okay, so how much of this do we need to train the AI? Well, not a lifetime of recordings of the test subject, but much less. See, 30 minutes of these voice samples. The technique asks us to say these sentences, and it analyzes the timbre, prosody and rhythm of our voice, which is quite a task. And what can it do afterwards? Well, hold on to your papers, because Jamil, not the real one, the cloned AI Jamil, has a scholarly message for you that I wrote. This is a voice line generated by an AI. Here, you fellow scholars are going to notice that the first law of papers says that research is a process. Do not look at where we are. Look at where we will be two more papers down the line. The third law of papers says that a bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see here is always just 1% of the work that was done. So what do you think about these voice lines? Are they real? Or are they synthesized? Well, of course, it is not perfect. I think that most of you are able to tell that these voice lines were synthesized, but my opinion is that these are easily good enough for a helpful, human-like virtual assistant. And really, how cool is that? Cloning a human voice from half an hour worth of sound samples. What a time to be alive. Note that I have been really tough with them; these are some long scholarly sentences that would give a challenge to any of these algorithms. Now, if you are one of our earlier fellow scholars, you might remember that a few years ago, we talked about an AI technique called Tacotron, which can perform voice cloning from a really short, few-second-long sample. 
I have only heard simple, shorter sentences from it, and what is new here is that this new technique takes more data, but in return offers higher quality. But it doesn't stop there. It does even more. This new method is easier to train, and it also generalizes better to more languages. And these are already good enough to be used in real products, so I wonder what a technique like this will look like just two more papers down the line. Maybe my voice here on Two Minute Papers could be synthesized by an AI. Maybe it already is. Would that be a good thing? What do you think? Also, a virtual Károly to read you your daily dose of papers. Hmm, actually, since NVIDIA has a great track record of putting these amazing tools into everyone's hands, if you are interested, there is already an early access program where you can apply. I hope that some of you will be able to try this, and let us know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers, or click the link in the video description, and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
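The transcript describes the cloning workflow at a high level: the speaker reads a fixed script for about 30 minutes, the system learns the timbre, prosody and rhythm of the voice, and afterwards it can speak arbitrary text. A hypothetical sketch of that workflow is below; none of these names (TTSModel, load_recordings, finetune, synthesize) come from NVIDIA's paper or the Riva API.

```python
# Hypothetical voice-cloning workflow, mirroring the steps in the transcript.
# Every class and function here is an assumed name for illustration only.

def clone_voice(recording_dir: str, script_sentences: list[str]):
    # 1) ~30 minutes of the speaker reading known sentences.
    samples = load_recordings(recording_dir, script_sentences)

    # 2) Fine-tune a pretrained multi-speaker TTS model so it captures
    #    the speaker's timbre, prosody and rhythm.
    model = TTSModel.from_pretrained("base-multispeaker-tts")  # assumed id
    model.finetune(samples, epochs=100)
    return model

# 3) Afterwards, any new text can be spoken in the cloned voice:
# model = clone_voice("recordings/", script)
# audio = model.synthesize("The first law of papers says that research is a process.")
```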
[{"start": 0.0, "end": 4.72, "text": " And these fellow scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 10.16, "text": " Today, we are going to clone real human voices using an AI."}, {"start": 10.16, "end": 11.16, "text": " How?"}, {"start": 11.16, "end": 16.240000000000002, "text": " Well, in an earlier Nvidia keynote, we had a look at Jensen Jr."}, {"start": 16.240000000000002, "end": 21.52, "text": " an AI-powered virtual assistant of Nvidia CEO Jensen Huang."}, {"start": 21.52, "end": 23.32, "text": " It could do this."}, {"start": 23.32, "end": 29.32, "text": " Synthetic biology is about designing biological systems at multiple levels from individual molecules."}, {"start": 29.32, "end": 31.8, "text": " Look at the face of that problem."}, {"start": 31.8, "end": 38.480000000000004, "text": " I love how it also uses hand gestures that go really well with the explanation."}, {"start": 38.480000000000004, "end": 44.8, "text": " These virtual AI assistants are going to appear everywhere to help you with your daily tasks."}, {"start": 44.8, "end": 50.44, "text": " For instance, in your car, the promise is that they will be able to recognize you as the"}, {"start": 50.44, "end": 56.64, "text": " owner of the car, recommend shows nearby, and even drive you there."}, {"start": 56.64, "end": 62.28, "text": " These omniverse avatars may also help us order our favorite burgers too."}, {"start": 62.28, "end": 66.04, "text": " And we won't even need to push buttons on a touchscreen."}, {"start": 66.04, "end": 72.44, "text": " We just need to say what we wish to eat, and the assistant will answer and take our orders,"}, {"start": 72.44, "end": 76.44, "text": " perhaps later, even in a familiar person's voice."}, {"start": 76.44, "end": 78.2, "text": " How cool is that?"}, {"start": 78.2, "end": 84.28, "text": " And today, I am going to ask you to imagine a future where we can all have our toy jensen's"}, {"start": 84.28, "end": 88.64, "text": " or our own virtual assistants with our own voice."}, {"start": 88.64, "end": 89.8, "text": " All of us."}, {"start": 89.8, "end": 91.84, "text": " That sounds really cool."}, {"start": 91.84, "end": 94.24000000000001, "text": " So is that in the far future?"}, {"start": 94.24000000000001, "end": 96.28, "text": " No, not at all."}, {"start": 96.28, "end": 102.96000000000001, "text": " Today, I have the amazing opportunity to show you a bit more about the AI that makes this"}, {"start": 102.96000000000001, "end": 105.16, "text": " voice synthesis happen."}, {"start": 105.16, "end": 111.28, "text": " And yes, you will hear things that are only available here at two minute papers."}, {"start": 111.28, "end": 113.68, "text": " So what is all this about?"}, {"start": 113.68, "end": 120.28, "text": " This work is an AI-based technique that takes samples of our voice and can then clone it."}, {"start": 120.28, "end": 122.2, "text": " Let's give it a try."}, {"start": 122.2, "end": 127.2, "text": " This is Jamil from NVIDIA who was kind enough to record these voice snippets."}, {"start": 127.2, "end": 128.20000000000002, "text": " Listen."}, {"start": 128.20000000000002, "end": 130.48000000000002, "text": " I think they have to change that."}, {"start": 130.48000000000002, "end": 132.44, "text": " Further details are expected later."}, {"start": 132.44, "end": 137.24, "text": " Okay, so how much of this do we need to train the AI?"}, {"start": 137.24, "end": 143.44, "text": " Well, not the entire life recordings of the test 
subject, but much less."}, {"start": 143.44, "end": 146.44, "text": " See 30 minutes of these voice samples."}, {"start": 146.44, "end": 153.8, "text": " The technique asks us to see these sentences and analyzes the timbre, prosody and the rhythm"}, {"start": 153.8, "end": 157.2, "text": " of our voice, which is quite a task."}, {"start": 157.2, "end": 159.24, "text": " And what can it do afterwards?"}, {"start": 159.24, "end": 167.48, "text": " Well, hold on to your papers because Jamil, not the real one, the clone AI Jamil has a scholarly"}, {"start": 167.48, "end": 169.8, "text": " message for you that I wrote."}, {"start": 169.8, "end": 172.4, "text": " This is a voice line generated by an AI."}, {"start": 172.4, "end": 176.64000000000001, "text": " Here if you fell as scholars are going to notice, the first law of papers says that research"}, {"start": 176.64000000000001, "end": 177.64000000000001, "text": " is a process."}, {"start": 177.64000000000001, "end": 179.32, "text": " Do not look at where we are."}, {"start": 179.32, "end": 181.92000000000002, "text": " Look at where we will be two more papers down the line."}, {"start": 181.92000000000002, "end": 187.28, "text": " The third law of papers says that a bad researcher fails 100% of the time, while a good one only"}, {"start": 187.28, "end": 189.68, "text": " fails 99% of the time."}, {"start": 189.68, "end": 194.20000000000002, "text": " Hence, what you see here is always just 1% of the work that was done."}, {"start": 194.20000000000002, "end": 197.16, "text": " So what do you think about these voice lines?"}, {"start": 197.16, "end": 198.16, "text": " Are they real?"}, {"start": 198.16, "end": 199.92000000000002, "text": " Or are they synthesized?"}, {"start": 199.92, "end": 202.32, "text": " Well, of course, it is not perfect."}, {"start": 202.32, "end": 207.95999999999998, "text": " I think that most of you are able to tell that these voice lines were synthesized, but"}, {"start": 207.95999999999998, "end": 214.95999999999998, "text": " my opinion is that these are easily good enough for a helpful human-like virtual assistant."}, {"start": 214.95999999999998, "end": 217.88, "text": " And really, how cool is that?"}, {"start": 217.88, "end": 222.76, "text": " Cloning a human voice from half an hour worth of sound samples."}, {"start": 222.76, "end": 224.56, "text": " What a time to be alive."}, {"start": 224.56, "end": 230.48, "text": " Note that I have been really tough with them, these are some long scholarly sentences that"}, {"start": 230.48, "end": 233.64000000000001, "text": " would give a challenge to any of these algorithms."}, {"start": 233.64000000000001, "end": 240.12, "text": " Now, if you are one of our earlier fellow scholars, you might remember that a few years ago,"}, {"start": 240.12, "end": 246.28, "text": " we talked about an AI technique called Tachotron, which can perform voice cloning from a really"}, {"start": 246.28, "end": 248.52, "text": " short few second-long sample."}, {"start": 248.52, "end": 254.24, "text": " I have only heard of simple shorter sentences for that, and what is new here is that this"}, {"start": 254.24, "end": 260.68, "text": " new technique takes more data, but in return offers higher quality."}, {"start": 260.68, "end": 262.36, "text": " But it doesn't stop there."}, {"start": 262.36, "end": 264.16, "text": " It does even more."}, {"start": 264.16, "end": 270.52, "text": " This new method is easier to train, and it also generalizes to more languages better."}, {"start": 
270.52, "end": 276.64, "text": " And these are already good enough to be used in real products, so I wonder what a technique"}, {"start": 276.64, "end": 280.56, "text": " like this will look like just two more papers down the line."}, {"start": 280.56, "end": 286.68, "text": " Maybe my voice here on two-minute papers could be synthesized by an AI."}, {"start": 286.68, "end": 288.72, "text": " Maybe it already is."}, {"start": 288.72, "end": 290.56, "text": " Would that be a good thing?"}, {"start": 290.56, "end": 291.56, "text": " What do you think?"}, {"start": 291.56, "end": 296.84000000000003, "text": " Also a virtual caroy for you to read your daily dose of papers."}, {"start": 296.84000000000003, "end": 303.04, "text": " Hmm, actually, since Nvidia has a great track record of putting these amazing tools into"}, {"start": 303.04, "end": 308.88, "text": " everyone's hands if you are interested, there is already an early access program where"}, {"start": 308.88, "end": 310.04, "text": " you can apply."}, {"start": 310.04, "end": 315.92, "text": " I hope that some of you will be able to try this and let us know in the comments below."}, {"start": 315.92, "end": 319.48, "text": " This episode has been supported by CoHear AI."}, {"start": 319.48, "end": 325.20000000000005, "text": " CoHear builds large language models and makes them available through an API so businesses"}, {"start": 325.20000000000005, "end": 332.0, "text": " can add advanced language understanding to their system or app quickly with just one line"}, {"start": 332.0, "end": 333.24, "text": " of code."}, {"start": 333.24, "end": 339.08000000000004, "text": " You can use your own data, whether it's text from customer service requests, legal contracts,"}, {"start": 339.08, "end": 347.08, "text": " or social media posts to create your own custom models to understand text or even generated."}, {"start": 347.08, "end": 352.2, "text": " For instance, it can be used to automatically determine whether your messages are about"}, {"start": 352.2, "end": 359.68, "text": " your business hours, returns, or shipping, or it can be used to generate a list of possible"}, {"start": 359.68, "end": 363.24, "text": " sentences you can use for your product descriptions."}, {"start": 363.24, "end": 368.96, "text": " Make sure to go to CoHear.ai slash papers or click the link in the video description"}, {"start": 368.96, "end": 371.47999999999996, "text": " and give it a try today."}, {"start": 371.47999999999996, "end": 372.84, "text": " It's super easy to use."}, {"start": 372.84, "end": 401.47999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YxmAQiiHOkA
Google’s Video AI: Outrageously Good! 🤖
❤️ Check out Runway and try it for free here: https://runwayml.com/papers/ Use the code TWOMINUTE at checkout to get 10% off! 📝 The paper "High Definition Video Generation with Diffusion Models" is available here: https://imagen.research.google/video/ 📝 My paper "The flow from simulation to reality" with is available here for free: - Free version: https://rdcu.be/cWPfD - Orig. Nature link - https://www.nature.com/articles/s41567-022-01788-5 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 - Teaser 0:15 - Text to image 0:37 - Text to video? 1:07 - It is really here! 1:45 - First example 2:48 - Second example 3:48 - Simulation or reality? 4:20 - Third example 5:08 - How long did this take? 5:48 - Failure cases 6:10 - More beautiful examples 6:21 - Looking under the hood 7:00 - Even more results Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #imagen
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I cannot believe that this paper is here. This is unbelievable. So, what is going on here? Yes, that's right. We know that these modern AI programs can paint images for us. Anything we wish, but today we are going to find out whether they can also do it with video. You see an example here... And here, are these also made by an AI? Well, I'll tell you in a moment. So, video. That sounds impossible. That is so much harder. You see, videos require a much greater understanding of the world around us, so much more computation, and my favorite, temporal coherence. What is that? This means that a video is not just a set of images, but a series of images that have to relate to each other. If the AI does not do a good job at this, we get this flickering. So, as all of this is so hard, we will be able to do this maybe in 5-10 years, or maybe never. Well, scientists at Google say not so fast. Now, hold onto your papers and have a look at this. Oh my goodness. Is it really here? I am utterly shocked, but the answer is yes. Yes it is. So now, let's have a look at 3 of my favorite examples, and then I'll tell you how much time this took. By the way, it is an almost unfathomably short time. Now one, the concept is the same. One simple text prompt goes in, for instance, a happy elephant wearing a birthday hat, walking under the sea, and this comes out. Wow, look at that. That is exactly what we were asking for in the prompt, plus, as I am a light transport researcher by trade, I am also looking at the waves and the sky through the sea, which is absolutely beautiful. But it doesn't stop there. I also see every light transport researcher's dream there. Water caustics: look at these gorgeous patterns. Now, not even this technique is perfect; you see that temporal coherence is still subject to improvement, the video still flickers a tiny bit, and the hat is also changing over time. However, this is incredible progress in so little time. Absolutely amazing. Two, in good Two Minute Papers fashion, now let's ask for a bit of physics: a bunch of autumn leaves falling on a calm lake, forming the text "Imagen Video". I love it. You see, in computer graphics, creating a simulation like this would take quite a bit of 3D modeling knowledge, and then we also have to fire up a fluid simulation. This does not seem to do a great deal of two-way coupling, which means that the water has an effect on the leaves, you see it affecting this leaf here, but the leaves do not seem to have a huge effect on the water itself. This is possible with specialized computer graphics algorithms, like this one, and I bet it will also be possible with Imagen Video version 2. Now, I am super happy to see the reflections of the leaves appearing on the water. Good job, little AI, and to think that this is just the first iteration of Imagen Video, wow! By the way, if you wish to see how detailed a real physics simulation can be, make sure to check out my Nature Physics comment paper in the video description. Spoiler alert: the surprising answer is that they can be almost as detailed as real life. I was also very happy with this splash, and with this turquoise liquid movement in the glass too. Great simulations from version 1. I am so happy. Now 3, give me a teddy bear doing the dishes. Whoa! Is this real? Yes it is. It really feels like we are living inside a science fiction movie. 
Now, it's not perfect, you can see that it is a little confused by the interaction of these objects, but if someone told me just a few weeks ago that an AI would be able to do this, I wouldn't have believed a word of it. And it not only has a really good understanding of reality, but it can also combine two previous concepts, a teddy bear and washing the dishes, into something new. My goodness, I love it. Now, while we look at some more beautiful results, we know that this is incredible progress in so little time. But how little exactly? Well, if you have been holding onto your paper so far, now squeeze that paper, because OpenAI's DALL-E 2 text-to-image AI appeared in April 2022, and Google's Imagen, also a text-to-image AI, appeared one month later, in May 2022. That is incredible. And get this, only 5 months later, by October 2022, we get this. An amazing text-to-video AI. I am out of words. Of course, it is not perfect. The hair of pets is typically still a problem, and the complexity of this ship battle is still a little too much for it to shoulder, so version 1 is not going to make a new Pirates of the Caribbean movie, but maybe version 3, two more papers down the line, who knows. Ah yes, about that. The resolution of these videos is not too bad at all. It is in 720p, which the literature likes to call high definition. These are not 4K, like many of the shows you can watch on your TV today, but this quality for a first crack at the problem is simply stunning. And don't forget that first it synthesizes a low resolution video, then upscales it through super-resolution, something Google is already really good at, so I would not be surprised for version 2 to easily go to full HD and maybe even beyond. As you see, the pace of progress in AI research is nothing short of amazing. And if, like me, you are yearning for some more results, you can check out the paper's website in the video description, where, as of the making of this video, you get a random selection of results. Refresh it a couple times and see if you get something new. And if I could somehow get access to this technique, you bet that I'd be generating a ton more of these. Update: I cannot make any promises, but good news, we are already working on it. A video of a scholar reading exploding papers absolutely needs to happen. Make sure to subscribe and hit the bell icon to not miss it in case it happens. You really don't want to miss that. So from now on, if you are wondering what a wooden figurine surfing in outer space looks like, you need to look no further. What a time to be alive. So what do you think? Does this get your mind going? What would you use this for? Let me know in the comments below. This episode has been supported by Runway, professional and magical AI video editing for everyone. And I often hear you fellow scholars asking, OK, these AI techniques look great, but when do I get to use them? And the answer is, right now. Runway is an amazing video editor that can do many of the things that you see here in this series. For instance, it can automatically replace the background behind the person. It can do inpainting for videos amazingly well. It can even do text to image, image to image, you name it. No wonder it is used by editors, post production teams and creators at companies like CBS, Google, Vox, and many others. Make sure to go to runwayml.com slash papers, sign up and try it for free today. And here comes the best part. Use the code TWOMINUTE at checkout and get 10% off your first month. Thanks for watching and for your generous support. 
And I'll see you next time.
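The transcript leans on the idea of temporal coherence: consecutive frames must relate to each other, and failures show up as flicker. As a concrete illustration (not a metric from the Imagen Video paper), here is a naive flicker score: the mean absolute difference between consecutive frames.

```python
# A naive flicker score for a video clip: the mean absolute difference
# between consecutive frames. Illustrative only; not from the paper.
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """frames has shape (T, H, W, C), with pixel values in [0, 1]."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())  # lower = smoother, more temporally coherent

# A static clip scores 0; independent per-frame noise scores about 1/3.
static = np.zeros((16, 64, 64, 3), dtype=np.float32)
noise = np.random.rand(16, 64, 64, 3).astype(np.float32)
print(flicker_score(static), flicker_score(noise))
```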
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 8.0, "text": " I cannot believe that this paper is here."}, {"start": 8.0, "end": 10.24, "text": " This is unbelievable."}, {"start": 10.24, "end": 12.48, "text": " So, what is going on here?"}, {"start": 12.48, "end": 14.0, "text": " Yes, that's right."}, {"start": 14.0, "end": 18.48, "text": " We know that these modern AI programs can paint images for us."}, {"start": 18.48, "end": 26.080000000000002, "text": " Anything we wish, but today we are going to find out whether they can also do it with video."}, {"start": 26.080000000000002, "end": 28.0, "text": " You see an example here..."}, {"start": 28.0, "end": 33.12, "text": " And here are these also made by an AI?"}, {"start": 33.12, "end": 35.76, "text": " Well, I'll tell you in a moment."}, {"start": 35.76, "end": 37.6, "text": " So, video."}, {"start": 37.6, "end": 39.6, "text": " That sounds impossible."}, {"start": 39.6, "end": 41.76, "text": " That is so much harder."}, {"start": 41.76, "end": 48.8, "text": " You see, videos require a much greater understanding of the world around us, so much more computation,"}, {"start": 48.8, "end": 51.84, "text": " and my favorite, temporal coherence."}, {"start": 51.84, "end": 53.6, "text": " What is that?"}, {"start": 53.6, "end": 60.72, "text": " This means that a video is not just a set of images, but a series of images that have to relate"}, {"start": 60.72, "end": 61.72, "text": " to each other."}, {"start": 61.72, "end": 67.52, "text": " If the AI does not do a good job at this, we get this flickering."}, {"start": 67.52, "end": 76.0, "text": " So, as all of this is so hard, we will be able to do this maybe in 5-10 years or maybe"}, {"start": 76.0, "end": 77.0, "text": " never."}, {"start": 77.0, "end": 80.8, "text": " Well, scientists at Google say not so fast."}, {"start": 80.8, "end": 85.12, "text": " Now, hold onto your papers and have a look at this."}, {"start": 85.12, "end": 87.44, "text": " Oh my goodness."}, {"start": 87.44, "end": 88.64, "text": " Is it really here?"}, {"start": 88.64, "end": 92.39999999999999, "text": " I am utterly shocked, but the answer is yes."}, {"start": 92.39999999999999, "end": 93.6, "text": " Yes it is."}, {"start": 93.6, "end": 99.88, "text": " So now, let's have a look at 3 of my favorite examples, and then I'll tell you how much"}, {"start": 99.88, "end": 101.56, "text": " time this took."}, {"start": 101.56, "end": 105.92, "text": " By the way, it is an almost unfathomably short time."}, {"start": 105.92, "end": 109.4, "text": " Now one, the concept is the same."}, {"start": 109.4, "end": 115.68, "text": " One simple text prompt goes in, for instance, a happy elephant wearing a birthday hat,"}, {"start": 115.68, "end": 119.4, "text": " walking under the sea, and this comes out."}, {"start": 119.4, "end": 121.96000000000001, "text": " Wow, look at that."}, {"start": 121.96000000000001, "end": 128.04000000000002, "text": " That is exactly what we were asking for in the prompt, plus as I am a light transport researcher"}, {"start": 128.04000000000002, "end": 135.12, "text": " by trade, I am also looking at the waves and the sky through the sea, which is absolutely"}, {"start": 135.12, "end": 136.12, "text": " beautiful."}, {"start": 136.12, "end": 138.32, "text": " But it doesn't stop there."}, {"start": 138.32, "end": 142.84, "text": " I also see every light transport researcher's dream there."}, {"start": 
142.84, "end": 146.56, "text": " Water, caustics, look at these gorgeous patterns."}, {"start": 146.56, "end": 152.48, "text": " Now not even this technique is perfect, you see that temporal coherence is still subject"}, {"start": 152.48, "end": 159.56, "text": " to improvement, the video still flickers a tiny bit, and the task is also changing over"}, {"start": 159.56, "end": 160.56, "text": " time."}, {"start": 160.56, "end": 165.48, "text": " However, this is incredible progress in so little time."}, {"start": 165.48, "end": 166.48, "text": " Absolutely amazing."}, {"start": 166.48, "end": 173.32, "text": " Two, in good two minute papers fashion, now let's ask for a bit of physics, a bunch of"}, {"start": 173.32, "end": 178.64, "text": " autumn leaves falling on a calm lake, forming the text, image and video."}, {"start": 178.64, "end": 180.07999999999998, "text": " I love it."}, {"start": 180.07999999999998, "end": 186.64, "text": " You see, in computer graphics, creating a simulation like this would take quite a bit of 3D modeling"}, {"start": 186.64, "end": 191.6, "text": " knowledge, and then we also have to fire up a fluid simulation."}, {"start": 191.6, "end": 196.56, "text": " This does not seem to do a great deal of two way coupling, which means that the water hasn't"}, {"start": 196.56, "end": 203.0, "text": " affect on the leaves, you see it at acting this leaf here, but the leaves do not seem to"}, {"start": 203.0, "end": 206.2, "text": " have a huge effect on the water itself."}, {"start": 206.2, "end": 211.68, "text": " This is possible with specialized computer graphics algorithms, like this one, and I bet"}, {"start": 211.68, "end": 216.28, "text": " it will also be possible with image and video version 2."}, {"start": 216.28, "end": 222.4, "text": " Now I am super happy to see the reflections of the leaves appearing on the water."}, {"start": 222.4, "end": 229.4, "text": " Good job little AI, and to think that this is just the first iteration of image and video,"}, {"start": 229.4, "end": 230.4, "text": " wow!"}, {"start": 230.4, "end": 235.96, "text": " By the way, if you wish to see how detailed a real physics simulation can be, make sure"}, {"start": 235.96, "end": 240.76, "text": " to check out my Nature Physics comment paper in the video description."}, {"start": 240.76, "end": 247.0, "text": " Spoiler alert, the surprising answer is that they can be almost as detailed as real"}, {"start": 247.0, "end": 248.0, "text": " life."}, {"start": 248.0, "end": 254.12, "text": " I was also very happy with this splash, and with this turquoise liquid movement in the glass"}, {"start": 254.12, "end": 255.12, "text": " too."}, {"start": 255.12, "end": 257.68, "text": " Great simulations on version 1."}, {"start": 257.68, "end": 259.71999999999997, "text": " I am so happy."}, {"start": 259.71999999999997, "end": 264.24, "text": " Now 3, give me a teddy bear doing the dishes."}, {"start": 264.24, "end": 265.24, "text": " Whoa!"}, {"start": 265.24, "end": 266.56, "text": " Is this real?"}, {"start": 266.56, "end": 267.56, "text": " Yes it is."}, {"start": 267.56, "end": 272.52, "text": " It really feels like we are living inside a science fiction movie."}, {"start": 272.52, "end": 278.4, "text": " Now it's not perfect, you can see that it is a little confused by the interaction of"}, {"start": 278.4, "end": 284.76, "text": " these objects, but if someone told me just a few weeks ago that an AI would be able to"}, {"start": 284.76, "end": 288.8, "text": " do this, I wouldn't have 
believed a word of it."}, {"start": 288.8, "end": 295.68, "text": " And it not only has a really good understanding of reality, but it can also combine two previous"}, {"start": 295.68, "end": 301.6, "text": " concepts, a teddy bear and washing the dishes into something new."}, {"start": 301.6, "end": 304.72, "text": " My goodness, I love it."}, {"start": 304.72, "end": 311.08, "text": " Now while we look at some more beautiful results, we know that this is incredible progress"}, {"start": 311.08, "end": 313.4, "text": " in so little time."}, {"start": 313.4, "end": 315.24, "text": " But how little exactly?"}, {"start": 315.24, "end": 321.2, "text": " Well, if you have been holding onto your paper so far, now squeeze that paper because"}, {"start": 321.2, "end": 329.76, "text": " OpenAI's Dolly2 text to image AI appeared in April 2022, Google's image also text to"}, {"start": 329.76, "end": 336.52, "text": " image appears one month later, May 2022, that is incredible."}, {"start": 336.52, "end": 343.76, "text": " And get this only 5 months later by October 2022, we get this."}, {"start": 343.76, "end": 346.88, "text": " An amazing text to video AI."}, {"start": 346.88, "end": 349.2, "text": " I am out of words."}, {"start": 349.2, "end": 351.12, "text": " Of course, it is not perfect."}, {"start": 351.12, "end": 357.84, "text": " The hair of pets is typically still a problem and the complexity of this ship battle is still"}, {"start": 357.84, "end": 363.56, "text": " a little too much for it to shoulder, so version 1 is not going to make a new Pirates of the"}, {"start": 363.56, "end": 370.24, "text": " Caribbean movie, but maybe version 3, 2 more papers down the line, who knows."}, {"start": 370.24, "end": 372.15999999999997, "text": " Ah yes, about that."}, {"start": 372.15999999999997, "end": 375.68, "text": " The resolution of these videos is not too bad at all."}, {"start": 375.68, "end": 381.32, "text": " It is in 720p, the literature likes to call it high definition."}, {"start": 381.32, "end": 387.32, "text": " These are not 4k, like many of the shows you can watch on your TV today, but this quality"}, {"start": 387.32, "end": 391.36, "text": " for a first crack at the problem is simply stunning."}, {"start": 391.36, "end": 397.96000000000004, "text": " And don't forget that first it synthesizes a low resolution video, then abscales it through"}, {"start": 397.96000000000004, "end": 404.24, "text": " super resolution, something Google is already really good at, so I would not be surprised"}, {"start": 404.24, "end": 410.72, "text": " for version 2 to easily go to full HD and maybe even beyond."}, {"start": 410.72, "end": 415.92, "text": " As you see, the pace of progress in AI research is nothing short of amazing."}, {"start": 415.92, "end": 421.24, "text": " And if like me, you are yearning for some more results, you can check out the papers website"}, {"start": 421.24, "end": 427.0, "text": " in the video description, where as of the making of this video, you get a random selection"}, {"start": 427.0, "end": 428.0, "text": " of results."}, {"start": 428.0, "end": 432.84000000000003, "text": " Refresh it a couple times and see if you get something new."}, {"start": 432.84, "end": 438.08, "text": " And if I could somehow get access to this technique, you bet that I'd be generating"}, {"start": 438.08, "end": 439.88, "text": " a ton more of these."}, {"start": 439.88, "end": 445.79999999999995, "text": " Update I cannot make any promises, but good news we are already working on 
it."}, {"start": 445.79999999999995, "end": 451.84, "text": " A video of a scholar reading exploding papers absolutely needs to happen."}, {"start": 451.84, "end": 456.91999999999996, "text": " Make sure to subscribe and hit the bell icon to not miss it in case it happens."}, {"start": 456.91999999999996, "end": 458.84, "text": " You really don't want to miss that."}, {"start": 458.84, "end": 464.79999999999995, "text": " So from now on, if you are wondering what a wooden figurine surfing in outer space looks"}, {"start": 464.79999999999995, "end": 467.79999999999995, "text": " like, you need to look no further."}, {"start": 467.79999999999995, "end": 469.59999999999997, "text": " What a time to be alive."}, {"start": 469.59999999999997, "end": 471.35999999999996, "text": " So what do you think?"}, {"start": 471.35999999999996, "end": 472.96, "text": " Does this get your mind going?"}, {"start": 472.96, "end": 474.84, "text": " What would you use this for?"}, {"start": 474.84, "end": 476.64, "text": " Let me know in the comments below."}, {"start": 476.64, "end": 483.12, "text": " This episode has been supported by Ranway, Professional and Magical AI video editing for"}, {"start": 483.12, "end": 484.12, "text": " everyone."}, {"start": 484.12, "end": 490.72, "text": " And often here you follow scholars asking, OK, these AI techniques look great, but when"}, {"start": 490.72, "end": 492.44, "text": " do I get to use them?"}, {"start": 492.44, "end": 498.44, "text": " And the answer is, right now, Ranway is an amazing video editor that can do many of the"}, {"start": 498.44, "end": 501.32, "text": " things that you see here in this series."}, {"start": 501.32, "end": 506.48, "text": " For instance, it can automatically replace the background behind the person."}, {"start": 506.48, "end": 510.84000000000003, "text": " It can do in painting for videos amazingly well."}, {"start": 510.84, "end": 515.64, "text": " It can do even text to image, image to image, you name it."}, {"start": 515.64, "end": 523.0799999999999, "text": " No wonder it is used by editors, post production teams and creators at companies like CBS, Google,"}, {"start": 523.0799999999999, "end": 525.28, "text": " Vox and many other."}, {"start": 525.28, "end": 532.68, "text": " Make sure to go to RanwayML.com, slash papers, sign up and try it for free today."}, {"start": 532.68, "end": 534.76, "text": " And here comes the best part."}, {"start": 534.76, "end": 540.24, "text": " Use the code 2 minute at checkout and get 10% off your first month."}, {"start": 540.24, "end": 542.24, "text": " Thanks for watching and for your generous support."}, {"start": 542.24, "end": 571.24, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Ybk8hxKeMYQ
Google’s New Robot: Don't Mess With This Guy! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Inner Monologue: Embodied Reasoning through Planning with Language Models" is available here: https://innermonologue.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #google #ai
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to give a really hard time to a robot. Look at this and this. You see, Google's robots are getting smarter and smarter every year. For instance, we talked about this robot assistant where we can say, please help me, I have spilled the coke, and then it creates a cunning plan. It tries to throw out the coke and, well, almost. Then it brings us a sponge. And if we've been reading research papers all day and we feel a little tired, if we tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple. A different robot of theirs has learned to drive itself, also understands English, and can thus take instructions from us and find its way around. And these instructions can be really nasty, like this one. However, there is one thing that neither of these previous robots can do, but this new one can. Let's see what it is. First, let's ask for a soda again, and once again, it comes up with a cunning plan. Go to the coke can, pick it up, and checkmate, little robot. Now comes the coolest part. It realizes that it has failed, and now, change of plans. It says the following. There is no coke, but there is an orange soda. Is that okay with us? No, no, we are going to be a little picky here, and say no, and ask for a lime soda instead. The robot probably thinks, oh goodness, a change of plans again, let's look for that lime soda. And it is, of course, really far away to give it a hard time, so let's see what it does. Wow, look at that. It found it at the other end of the room, recognized that this is indeed the soda, picks it up, and we are done. So cool, the amenities at the Google headquarters are truly next level. I love it. This was super fun, so you know what? Let's try another task. Let's ask it to put the coke can into the top drawer. Will it succeed? Well, look at that. The human operator cannot wait to mess with this little robot, and, aha, sure enough, that drawer is not opening. So is this a problem? Well, the robot recognized that this was not successful, but now the environment has changed, the pesky human is gone, so it tries again. And this time the drawer opens, in goes the coke can, it holds the drawer with both fingers, and now it just needs a gentle little push and bravo, good job. We tried to confuse this robot by messing up the environment, and it did really well, but now get this. What if we mess with its brain instead? How? Well, check this out. Let's ask it to throw away the snack on the counter. It asks which one, to which our answer is, of course, not the apple. No, no, let's mess with it a little more. Our answer is that we changed our mind, so please throw away something on the table instead. Okay, now as it approaches the table, let's try to mess with it again. You know what? Never mind, just finish the previous task instead. And it finally goes there and grabs the apple. We tried really hard, but we could not mess with this guy. So cool. So, a new robot that understands English, can move around, make a plan, and most importantly, it can come up with a plan B when plan A fails (a minimal sketch of such a replanning loop follows after this transcript). Wow, a little personal assistant. The pace of progress in AI research is nothing short of amazing. What a time to be alive. So what would you use this for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required.
Just sign up and launch an instance, and hold on to your papers, because with Lambda GPU Cloud you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
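The "plan B" behavior summarized in the transcript above (plan with a language model, act, observe, and replan when an action fails) can be captured as a closed-loop control sketch. This is a minimal illustration under stated assumptions, not the paper's actual system: llm_plan, execute and observe are hypothetical stand-ins for a language-model planner, a robot controller and a success detector.

```python
def run_task(goal: str, llm_plan, execute, observe, max_retries: int = 5) -> bool:
    """Closed-loop sketch: plan, act, check, and replan on failure.

    llm_plan(goal, feedback) -> list of action strings (hypothetical planner)
    execute(action)          -> performs one action on the robot
    observe()                -> "" on success, else a failure description
    """
    feedback = []                          # running "inner monologue" of outcomes
    for _ in range(max_retries):
        plan = llm_plan(goal, feedback)    # plan conditioned on past failures
        for action in plan:
            execute(action)
            result = observe()             # e.g. "drawer did not open"
            if result:                     # failure: record it and replan
                feedback.append(f"{action} failed: {result}")
                break
        else:
            return True                    # every action succeeded
    return False                           # gave up after max_retries plans
```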
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.64, "text": " Today we are going to give a really hard time to a robot."}, {"start": 9.64, "end": 12.76, "text": " Look at this and this."}, {"start": 12.76, "end": 17.96, "text": " You see, Google's robots are getting smarter and smarter every year."}, {"start": 17.96, "end": 23.78, "text": " For instance, we talked about this robot assistant where we can say, please help me, I have"}, {"start": 23.78, "end": 28.400000000000002, "text": " spilled the coke and then it creates a cunning plan."}, {"start": 28.4, "end": 32.96, "text": " It tries to throw out the coke and, well, almost."}, {"start": 32.96, "end": 37.08, "text": " Then it brings us a sponge."}, {"start": 37.08, "end": 43.239999999999995, "text": " And if we've been reading research papers all day and we feel a little tired, if we"}, {"start": 43.239999999999995, "end": 50.16, "text": " tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple."}, {"start": 50.16, "end": 56.76, "text": " A different robot of theirs has learned to drive itself, also understands English, and"}, {"start": 56.76, "end": 62.04, "text": " can thus take instructions from us and find its way around."}, {"start": 62.04, "end": 66.48, "text": " And these instructions can be really nasty, like this one."}, {"start": 66.48, "end": 72.8, "text": " However, there is one thing that neither of these previous robots can do, but this new"}, {"start": 72.8, "end": 73.8, "text": " one can."}, {"start": 73.8, "end": 75.52, "text": " Let's see what it is."}, {"start": 75.52, "end": 83.12, "text": " First, let's ask for a soda again, and once again, it comes up with a cunning plan."}, {"start": 83.12, "end": 89.32000000000001, "text": " Go to the coke can, pick it up, and checkmate little robot."}, {"start": 89.32000000000001, "end": 91.52000000000001, "text": " Now comes the coolest part."}, {"start": 91.52000000000001, "end": 96.04, "text": " It realizes that it has failed, and now change of plans."}, {"start": 96.04, "end": 97.84, "text": " It says the following."}, {"start": 97.84, "end": 102.44, "text": " There is no coke, but there is an orange soda."}, {"start": 102.44, "end": 104.08000000000001, "text": " Is that okay with us?"}, {"start": 104.08000000000001, "end": 112.0, "text": " No, no, we are going to be a little picky here, and say no, and ask for a lime soda instead."}, {"start": 112.0, "end": 118.2, "text": " The robot probably thinks, oh goodness, a change of plans again, let's look for that lime"}, {"start": 118.2, "end": 119.2, "text": " soda."}, {"start": 119.2, "end": 126.32, "text": " And it is, of course, really far away to give it a hard time, so let's see what it does."}, {"start": 126.32, "end": 129.12, "text": " Wow, look at that."}, {"start": 129.12, "end": 136.12, "text": " It found it in the other end of the room, recognized that this is indeed the soda, picks it up,"}, {"start": 136.12, "end": 138.12, "text": " and we are done."}, {"start": 138.12, "end": 143.76, "text": " So cool, the amenities at the Google headquarters are truly next level."}, {"start": 143.76, "end": 145.56, "text": " I love it."}, {"start": 145.56, "end": 148.68, "text": " This was super fun, so you know what?"}, {"start": 148.68, "end": 150.52, "text": " Let's try another task."}, {"start": 150.52, "end": 154.72, "text": " Let's ask it to put the coke can into the top 
drawer."}, {"start": 154.72, "end": 156.16, "text": " Will it succeed?"}, {"start": 156.16, "end": 158.36, "text": " Well, look at that."}, {"start": 158.36, "end": 166.16, "text": " The human operator cannot wait to mess with this little robot, and, aha, sure enough,"}, {"start": 166.16, "end": 168.72, "text": " that drawer is not opening."}, {"start": 168.72, "end": 170.48, "text": " So is this a problem?"}, {"start": 170.48, "end": 178.0, "text": " Well, the robot recognized that this was not successful, but now the environment has changed,"}, {"start": 178.0, "end": 182.32, "text": " the pesky human is gone, so it tries again."}, {"start": 182.32, "end": 190.0, "text": " And this time the drawer opens, in goes the coke can, it holds the drawer with both fingers,"}, {"start": 190.0, "end": 197.0, "text": " and now I just need a gentle little push and bravo, good job."}, {"start": 197.0, "end": 203.12, "text": " We tried to confuse this robot by messing up the environment, and it did really well,"}, {"start": 203.12, "end": 205.12, "text": " but now get this."}, {"start": 205.12, "end": 207.96, "text": " What if we mess with its brain instead?"}, {"start": 207.96, "end": 208.96, "text": " How?"}, {"start": 208.96, "end": 210.6, "text": " Well, check this out."}, {"start": 210.6, "end": 214.2, "text": " Let's ask it to throw away the snack on the counter."}, {"start": 214.2, "end": 220.16, "text": " It asks which one and to which our answer is, of course, not the apple."}, {"start": 220.16, "end": 223.28, "text": " No, no, let's mess with it a little more."}, {"start": 223.28, "end": 230.51999999999998, "text": " Our answer is that we changed our mind, so please throw away something on the table instead."}, {"start": 230.51999999999998, "end": 235.88, "text": " Okay, now as it approaches the table, let's try to mess with it again."}, {"start": 235.88, "end": 236.88, "text": " You know what?"}, {"start": 236.88, "end": 240.64, "text": " Never mind, just finish the previous task instead."}, {"start": 240.64, "end": 244.12, "text": " And it finally goes there and grabs the apple."}, {"start": 244.12, "end": 248.32, "text": " We tried really hard, but we could not mess with this guy."}, {"start": 248.32, "end": 249.32, "text": " So cool."}, {"start": 249.32, "end": 257.2, "text": " So, a new robot that understands English can move around, make a plan, and most importantly,"}, {"start": 257.2, "end": 261.72, "text": " it can come up with a plan B when plan A fails."}, {"start": 261.72, "end": 265.04, "text": " Wow, a little personal assistant."}, {"start": 265.04, "end": 269.4, "text": " The pace of progress in AR research is nothing short of amazing."}, {"start": 269.4, "end": 271.28000000000003, "text": " What a time to be alive."}, {"start": 271.28000000000003, "end": 273.84000000000003, "text": " So what would you use this for?"}, {"start": 273.84, "end": 275.47999999999996, "text": " Let me know in the comments below."}, {"start": 275.47999999999996, "end": 282.44, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 282.44, "end": 285.91999999999996, "text": " in the world for GPU cloud compute."}, {"start": 285.91999999999996, "end": 288.84, "text": " No commitments or negotiation required."}, {"start": 288.84, "end": 295.44, "text": " Just sign up and launch an instance and hold on to your papers because with Lambda GPU"}, {"start": 295.44, "end": 304.84, "text": " cloud you can get on demand A 100 instances for $1.10 per hour 
versus $4.10 per hour with"}, {"start": 304.84, "end": 305.84, "text": " AWS."}, {"start": 305.84, "end": 309.04, "text": " That's 73% savings."}, {"start": 309.04, "end": 312.52, "text": " Did I mention they also offer persistent storage?"}, {"start": 312.52, "end": 320.76, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 320.76, "end": 323.08, "text": " workstations or servers."}, {"start": 323.08, "end": 330.08, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 330.08, "end": 331.08, "text": " today."}, {"start": 331.08, "end": 359.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=VxbTiuabW0k
Intel’s New AI: Amazing Ray Tracing Results! ☀️
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "Temporally Stable Real-Time Joint Neural Denoising and Supersampling" is available here: https://www.intel.com/content/www/us/en/developer/articles/technical/temporally-stable-denoising-and-supersampling.html 📝 Our earlier paper with the spheres scene that took 3 weeks: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This is my happy episode. Why is that? Well, of course, because today we are talking about light transport simulations and in particular Intel's amazing new technique that can take this and make it into this. Wow, it can also take this and make it into this. My goodness, this is amazing. But wait a second, what is going on here? What are these noisy videos for? And why? Well, if we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm and then this happens. Oh no, we have noise. Tons of it. But why? Well, during the simulation we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images (a small numerical sketch of this effect follows after this transcript). And this clears up over time, but it may take a long time. How do we know that? Well, have a look at the reference simulation footage for this paper. See, there is still some noise in here. I am sure this would clean up over time, but no one said that it would do so quickly. A video like this might still require hours to days to compute. For instance, this is from a previous paper that took three weeks to finish, and it ran on multiple computers at the same time. So is all hope lost for these beautiful photorealistic simulations? Well, not quite. Instead of waiting for hours or days, what if I told you that we can just wait for a small fraction of a second, about 10 milliseconds, and it will produce this. And then we run a previous noise filtering technique that is specifically tailored for light transport simulations. And what do we get? Probably not much, right? I can barely tell what I should be seeing here. So let's see a previous method. Whoa, that is way better. I was barely able to guess what these are, but now we know what they are. Great. So we don't have to wait for hours to days for a simulated world to come alive in a video like this, just a few milliseconds, at least for the simulation; we don't know how long the noise filtering takes. And now hold on to your papers, because this was not today's paper's result. I hope this one can do even better. And look, instead it can do this. Wow. This is so much better. And here is the result of the reference simulation for comparison, this is the one that takes forever to compute. Let's also have a look at the videos and compare them. This is the noisy input simulation. Wow. This is going to be hard. Now, the previous method. Yes, this is clearly better, but there is a problem. Do you see the problem? Oh yes, it smoothed out the noise, but it smoothed out the details too. Hence a lot of them are lost. So let's see what Intel's new method can do instead. Now we're talking. So much better. I absolutely love it. It is still not as sharp as the reference simulation, however, in some regions, depending on your taste, it might even be more pleasing to the eye than this reference. And it gets better. This technique does not only do denoising, but upsampling too. This means that it is able to create a higher resolution image with more pixels than the input footage. Now get ready, one more comparison, and I'll tell you how long the noise filtering took. I wonder what it will do with this noisy mess. I have no idea what is going on here. And neither does this previous technique. And this is not some ancient technique.
This previous method is the neural bilateral grid, a learning-based method from just two years ago. And now have a look at this. My goodness, is this really possible? So much progress, just one more paper down the line. I absolutely love it. So good. So how long do we have to wait for an image like this? Still hours to days? Well, not at all. This runs not only in real time, it runs faster than real time. Yes, that means about 200 frames per second for the new noise filtering step. And remember, the light simulation part typically takes 4 to 12 milliseconds on these scenes. This is the noisy mess that we get. And just 5 milliseconds later we get this. I cannot believe it. Bravo. So real-time light transport simulations from now on. Oh yes, sign me up right now. What a time to be alive. So what do you think? Let me know in the comments below. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more. Make sure to visit wandb.me slash paper forum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
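A side note on the noise discussed in the transcript above: a path tracer estimates each pixel by averaging random ray samples, so the error only shrinks with the square root of the sample count. The toy sketch below (a simple 1D integrand standing in for a real renderer) shows why a handful of samples per pixel looks noisy and why cleanup is slow, which is exactly what fast denoisers sidestep.

```python
import random

def estimate_pixel(samples: int) -> float:
    """Monte Carlo estimate of one 'pixel': the average of f(x) = x * x
    over [0, 1], whose true value is 1/3. Each random x plays the role
    of one light ray sample."""
    total = 0.0
    for _ in range(samples):
        x = random.random()
        total += x * x
    return total / samples

if __name__ == "__main__":
    random.seed(0)
    for n in (4, 64, 1024, 16384):
        runs = [estimate_pixel(n) for _ in range(5)]
        spread = max(runs) - min(runs)   # visible "noise" across re-renders
        print(f"{n:6d} samples: spread {spread:.4f} around the true 0.3333")
```

Quadrupling the sample count only roughly halves the spread, which is why smarter sample placement and learned denoising, like the technique in this episode, matter so much.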
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 7.0, "text": " This is my happy episode."}, {"start": 7.0, "end": 8.0, "text": " Why is that?"}, {"start": 8.0, "end": 16.0, "text": " Well, of course, because today we are talking about light transport simulations and in particular"}, {"start": 16.0, "end": 23.0, "text": " Intel's amazing new technique that can take this and make it into this."}, {"start": 23.0, "end": 29.0, "text": " Wow, it can also take this and make it into this."}, {"start": 29.0, "end": 33.0, "text": " My goodness, this is amazing."}, {"start": 33.0, "end": 36.0, "text": " But wait a second, what is going on here?"}, {"start": 36.0, "end": 38.0, "text": " What are these noisy videos for?"}, {"start": 38.0, "end": 39.0, "text": " And why?"}, {"start": 39.0, "end": 46.5, "text": " Well, if we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually"}, {"start": 46.5, "end": 52.0, "text": " reach out to a light transport simulation algorithm and then this happens."}, {"start": 52.0, "end": 55.0, "text": " Oh no, we have noise."}, {"start": 55.0, "end": 57.0, "text": " Tons of it."}, {"start": 57.0, "end": 58.0, "text": " But why?"}, {"start": 58.0, "end": 64.1, "text": " Well, during the simulation we have to shoot millions and millions of light rays into the"}, {"start": 64.1, "end": 70.62, "text": " scene to estimate how much light is bouncing around and before we have simulated enough"}, {"start": 70.62, "end": 76.96000000000001, "text": " rays, the inaccuracies in our estimations show up as noise in these images."}, {"start": 76.96000000000001, "end": 82.44, "text": " And this clears up over time, but it may take a long time."}, {"start": 82.44, "end": 83.96000000000001, "text": " How do we know that?"}, {"start": 83.96, "end": 87.96, "text": " Well, have a look at the reference simulation footage for this paper."}, {"start": 87.96, "end": 91.88, "text": " See, there is still some noise in here."}, {"start": 91.88, "end": 98.67999999999999, "text": " I am sure this would clean up over time, but no one said that it would do so quickly."}, {"start": 98.67999999999999, "end": 103.72, "text": " A video like this might still require hours to days to compute."}, {"start": 103.72, "end": 110.72, "text": " For instance, this is from a previous paper that took three weeks to finish and it ran"}, {"start": 110.72, "end": 114.12, "text": " on multiple computers at the same time."}, {"start": 114.12, "end": 119.28, "text": " So is all whole past for these beautiful photorealistic simulations?"}, {"start": 119.28, "end": 121.52, "text": " Well, not quite."}, {"start": 121.52, "end": 127.48, "text": " Instead of waiting for hours or days, what if I told you that we can just wait for a small"}, {"start": 127.48, "end": 134.12, "text": " fraction of a second, about 10 milliseconds, and it will produce this."}, {"start": 134.12, "end": 140.28, "text": " And then run a previous noise filtering technique that is specifically tailored for light"}, {"start": 140.28, "end": 142.24, "text": " transport simulations."}, {"start": 142.24, "end": 143.92, "text": " And what do we get?"}, {"start": 143.92, "end": 146.16, "text": " Probably not much, right?"}, {"start": 146.16, "end": 149.44, "text": " I can barely tell what I should be seeing here."}, {"start": 149.44, "end": 152.08, "text": " So let's see a previous method."}, {"start": 152.08, "end": 155.56, "text": " Whoa, that 
is way better."}, {"start": 155.56, "end": 159.76, "text": " I was barely able to guess what these are, but now we know."}, {"start": 159.76, "end": 160.76, "text": " Great things."}, {"start": 160.76, "end": 161.76, "text": " Great."}, {"start": 161.76, "end": 167.64, "text": " So we don't have to wait for hours, today's for a simulated world to come alive in a video"}, {"start": 167.64, "end": 174.55999999999997, "text": " like this, just a few milliseconds, at least for the simulation, we don't know how long"}, {"start": 174.55999999999997, "end": 176.44, "text": " the noise filtering takes."}, {"start": 176.44, "end": 182.2, "text": " And now hold on to your papers, because this was not today's papers result, I hope this"}, {"start": 182.2, "end": 185.07999999999998, "text": " one can do even better."}, {"start": 185.07999999999998, "end": 188.48, "text": " And look, instead it can do this."}, {"start": 188.48, "end": 190.2, "text": " Wow."}, {"start": 190.2, "end": 192.64, "text": " This is so much better."}, {"start": 192.64, "end": 197.35999999999999, "text": " And the result of the reference simulation for comparison, this is the one that takes"}, {"start": 197.36, "end": 199.0, "text": " forever to compute."}, {"start": 199.0, "end": 203.32000000000002, "text": " Let's also have a look at the videos and compare them."}, {"start": 203.32000000000002, "end": 205.88000000000002, "text": " This is the noisy input simulation."}, {"start": 205.88000000000002, "end": 207.4, "text": " Wow."}, {"start": 207.4, "end": 209.16000000000003, "text": " This is going to be hard."}, {"start": 209.16000000000003, "end": 211.20000000000002, "text": " Now, the previous method."}, {"start": 211.20000000000002, "end": 216.32000000000002, "text": " Yes, this is clearly better, but there is a problem."}, {"start": 216.32000000000002, "end": 217.72000000000003, "text": " Do you see the problem?"}, {"start": 217.72000000000003, "end": 224.44000000000003, "text": " Oh yes, it smoothed out the noise, but it smoothed out the details too."}, {"start": 224.44000000000003, "end": 227.04000000000002, "text": " Hence a lot of them are lost."}, {"start": 227.04, "end": 232.23999999999998, "text": " So let's see what Intel's new method can do instead."}, {"start": 232.23999999999998, "end": 233.79999999999998, "text": " Now we're talking."}, {"start": 233.79999999999998, "end": 234.88, "text": " So much better."}, {"start": 234.88, "end": 237.64, "text": " I absolutely love it."}, {"start": 237.64, "end": 243.32, "text": " It is still not as sharp as the reference simulation, however, in some regions, depending"}, {"start": 243.32, "end": 248.88, "text": " on your taste, it might even be more pleasing to the eye than this reference."}, {"start": 248.88, "end": 250.28, "text": " And it gets better."}, {"start": 250.28, "end": 255.39999999999998, "text": " This technique does not only denoising, but upsampling too."}, {"start": 255.4, "end": 261.08, "text": " This means that it is able to create a higher resolution image with more pixels than the"}, {"start": 261.08, "end": 262.48, "text": " input footage."}, {"start": 262.48, "end": 269.36, "text": " Now get ready, one more comparison, and I'll tell you how long the noise filtering took."}, {"start": 269.36, "end": 273.2, "text": " I wonder what it will do with this noisy mess."}, {"start": 273.2, "end": 276.56, "text": " I have no idea what is going on here."}, {"start": 276.56, "end": 279.52, "text": " And neither does this previous technique."}, {"start": 279.52, 
"end": 282.0, "text": " And this is not some ancient technique."}, {"start": 282.0, "end": 287.56, "text": " This previous method is the neural bilateral grid, a learning based method from just two"}, {"start": 287.56, "end": 288.76, "text": " years ago."}, {"start": 288.76, "end": 291.68, "text": " And now have a look at this."}, {"start": 291.68, "end": 294.84, "text": " My goodness, is this really possible?"}, {"start": 294.84, "end": 298.48, "text": " So much progress, just one more paper down the line."}, {"start": 298.48, "end": 301.36, "text": " I absolutely love it."}, {"start": 301.36, "end": 302.36, "text": " So good."}, {"start": 302.36, "end": 306.36, "text": " So how long do we have to wait for an image like this?"}, {"start": 306.36, "end": 308.28, "text": " Still hours to days?"}, {"start": 308.28, "end": 310.08, "text": " Well, not at all."}, {"start": 310.08, "end": 316.28, "text": " This orans not only in real time, it runs faster than real time."}, {"start": 316.28, "end": 322.64, "text": " Yes, that means about 200 frames per second for the new noise filtering step."}, {"start": 322.64, "end": 329.56, "text": " And remember, the light simulation part typically takes 4 to 12 milliseconds on these scenes."}, {"start": 329.56, "end": 332.0, "text": " This is the noisy mess that we get."}, {"start": 332.0, "end": 336.15999999999997, "text": " And just 5 milliseconds later we get this."}, {"start": 336.15999999999997, "end": 338.52, "text": " I cannot believe it."}, {"start": 338.52, "end": 339.52, "text": " Bravo."}, {"start": 339.52, "end": 343.2, "text": " So real time light transport simulations from now on."}, {"start": 343.2, "end": 346.4, "text": " Oh yes, sign me up right now."}, {"start": 346.4, "end": 348.35999999999996, "text": " What a time to be alive."}, {"start": 348.35999999999996, "end": 350.15999999999997, "text": " So what do you think?"}, {"start": 350.15999999999997, "end": 352.03999999999996, "text": " Let me know in the comments below."}, {"start": 352.03999999999996, "end": 355.71999999999997, "text": " This video has been supported by weights and biases."}, {"start": 355.71999999999997, "end": 356.71999999999997, "text": " Look at this."}, {"start": 356.71999999999997, "end": 362.24, "text": " They have a great community forum that aims to make you the best machine learning engineer"}, {"start": 362.24, "end": 363.32, "text": " you can be."}, {"start": 363.32, "end": 368.24, "text": " You see, I always get messages from you fellow scholars telling me that you have been"}, {"start": 368.24, "end": 373.48, "text": " inspired by the series, but don't really know where to start."}, {"start": 373.48, "end": 374.8, "text": " And here it is."}, {"start": 374.8, "end": 380.48, "text": " In this forum, you can share your projects, ask for advice, look for collaborators and"}, {"start": 380.48, "end": 381.48, "text": " more."}, {"start": 381.48, "end": 389.6, "text": " Make sure to visit wmb.me slash paper forum and say hi or just click the link in the video"}, {"start": 389.6, "end": 390.6, "text": " description."}, {"start": 390.6, "end": 395.76, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 395.76, "end": 397.08, "text": " better videos for you."}, {"start": 397.08, "end": 401.03999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=L3G0dx1Q0R8
Google’s New AI: DALL-E, But Now In 3D! 🤯
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "DreamFusion: Text-to-3D using 2D Diffusion" is available here: https://dreamfusion3d.github.io/ Unofficial open source implementation: https://github.com/ashawkey/stable-dreamfusion Interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how this new AI is able to take a piece of text from us, anything we wish, be it a squirrel dressed like the king of England, a car made out of sushi or a humanoid robot using a laptop, and, magically, it creates not an image like previous techniques, but, get this, a full 3D model of it. Wow, this is absolutely amazing, an AI that can not only create images, but create 3D assets. Yes, indeed, the result is a full 3D model that we can rotate around and even use in our virtual worlds. So, let's give it a really hard time and see together what it is capable of. For instance, OpenAI's earlier DALL-E text-to-image AI was capable of looking at a bunch of images of koalas and separately a bunch of images of motorcycles, and it started to understand the concept of both and it was able to combine the two together into a completely new image. That is a koala riding a motorcycle. So, let's see if this new method is also capable of creating new concepts by building on previous knowledge. Well, let's see. Oh yes, here is a tiger wearing sunglasses and a leather jacket and most importantly riding a motorcycle. Tigers and motorcycles are well-understood concepts. Of course, the neural network had plenty of these to look at in its training set, but combining the two concepts together, now that is a hint of creativity. Creativity in a machine, loving it. What I also loved about this work is that it makes it so easy to iterate on our ideas. For instance, first we can start experimenting with a real squirrel, or if we did not like it, we can quickly ask for a wooden carving, or even a metal sculpture of it. Then we can start dressing it up and make it do anything we want. And sometimes the results are nearly good enough to be used as is even in an animation movie, or in virtual worlds, or even in the worst cases, I think these could be used as a starting point for an artist to continue from. That would save a ton of time and energy in a lot of projects. And that is huge. Just consider all the miraculous things artists are using the DALL-E 2 text-to-image AI and Stable Diffusion for: illustrating novels, texture synthesis, product design, weaving multiple images together to create these crazy movies, you name it. And now I wonder what unexpected uses will arise from this being possible for 3D models. Do you have some ideas? Let me know in the comments below. And just imagine what this will be capable of just a couple more papers down the line. For instance, the original DALL-E AI was capable of this, and then just a year later this became possible. So how does this black magic work? Well, the cool thing is that this is also a diffusion-based technique, which means that similarly to the text-to-image AIs, it starts out from a piece of noise, and refines this noise over time to resemble our input text a little more (a toy sketch of such an update step follows after this transcript). But this time, the diffusion process is running in higher dimensions, thus the result is not a 2D image, but a full 3D model. So, from now on, the limit in creating 3D worlds is not our artistic skill, the limit is only our imagination. What a time to be alive! Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their amazing sweeps feature, which helps you find and reproduce your best runs, and even better, what made this particular run the best. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more.
And the best part is that Weights & Biases is free for all individuals, academics, and open-source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
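For the diffusion explanation in the transcript above, one way the "refine noise toward the text" idea can drive a 3D model is score distillation: render the current 3D scene from a random view, noise the rendering, and let a frozen text-conditioned 2D diffusion model indicate which direction looks more like the prompt. The sketch below is a loose illustration in that spirit, not the paper's exact algorithm; render_fn and diffusion_eps are assumed stand-in callables.

```python
import torch

def score_distillation_step(render_fn, params, diffusion_eps, text_emb,
                            optimizer, sigma: float = 0.5) -> float:
    """One score-distillation-style update of a differentiable 3D model.

    render_fn(params) -> image tensor rendered from the current 3D model
    diffusion_eps(noisy, sigma, text_emb) -> predicted noise from a frozen,
    text-conditioned 2D diffusion model (both are hypothetical hooks).
    """
    image = render_fn(params)                     # render a random view
    noise = torch.randn_like(image)
    noisy = image + sigma * noise                 # diffuse the rendering
    with torch.no_grad():
        eps_hat = diffusion_eps(noisy, sigma, text_emb)
    # Nudge the 3D model so its renderings look more "denoisable" toward
    # the text prompt; (eps_hat - noise) acts as the gradient on the image.
    image.backward(gradient=(eps_hat - noise))
    optimizer.step()
    optimizer.zero_grad()
    return (eps_hat - noise).abs().mean().item()  # rough progress signal
```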
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karozona Ifehr."}, {"start": 4.64, "end": 11.200000000000001, "text": " Today, we are going to see how this new AI is able to take a piece of text from us,"}, {"start": 11.200000000000001, "end": 16.48, "text": " anything we wish, be it a squirrel dressed like the king of England,"}, {"start": 16.48, "end": 23.48, "text": " a car made out of sushi or a humanoid robot using a laptop and magically."}, {"start": 23.48, "end": 30.92, "text": " It creates not an image like previous techniques, but get this a full 3D model of it."}, {"start": 30.92, "end": 37.32, "text": " Wow, this is absolutely amazing, an AI that can not only create images,"}, {"start": 37.32, "end": 39.96, "text": " but create 3D assets."}, {"start": 39.96, "end": 48.28, "text": " Yes, indeed, the result is a full 3D model that we can rotate around and even use in our virtual worlds."}, {"start": 48.28, "end": 54.6, "text": " So, let's give it a really hard time and see together what it is capable of."}, {"start": 54.6, "end": 62.2, "text": " For instance, open AI's earlier, Dali, text to image AI, was capable of looking at a bunch of images"}, {"start": 62.2, "end": 70.6, "text": " of koalas and separately a bunch of images of motorcycles, and it started to understand the concept"}, {"start": 70.6, "end": 77.32, "text": " of both and it was able to combine the two together into a completely new image."}, {"start": 77.32, "end": 80.75999999999999, "text": " That is a koala riding a motorcycle."}, {"start": 80.75999999999999, "end": 88.44, "text": " So, let's see if this new method is also capable of creating new concepts by building on previous"}, {"start": 88.44, "end": 97.0, "text": " knowledge. Well, let's see. Oh yes, here is a tiger wearing sunglasses and a leather jacket"}, {"start": 97.0, "end": 104.35999999999999, "text": " and most importantly riding a motorcycle. Tigers and motorcycles are well understood concepts."}, {"start": 104.36, "end": 109.4, "text": " Of course, the neural network had plenty of these to look at in its training set,"}, {"start": 109.4, "end": 114.76, "text": " but combining the two concepts together, now that is a hint of creativity."}, {"start": 115.4, "end": 123.0, "text": " Creativity in a machine, loving it. What I also loved about this work is that it makes it so"}, {"start": 123.0, "end": 130.6, "text": " easy to iterate on our ideas. For instance, first we can start experimenting with a real squirrel,"}, {"start": 130.6, "end": 138.12, "text": " or if we did not like it, we can quickly ask for a wooden carving, or even a metal sculpture of it."}, {"start": 138.51999999999998, "end": 143.95999999999998, "text": " Then we can start dressing it up and make it do anything we want."}, {"start": 143.95999999999998, "end": 151.48, "text": " And sometimes the results are nearly good enough to be used as is even in an animation movie,"}, {"start": 151.48, "end": 159.0, "text": " or in virtual worlds, or even in the worst cases, I think these could be used as a starting point"}, {"start": 159.0, "end": 165.88, "text": " for an artist to continue from. That would save a ton of time and energy in a lot of projects."}, {"start": 166.44, "end": 173.08, "text": " And that is huge. 
Just consider all the miraculous things artists are using the Dolly Tool,"}, {"start": 173.08, "end": 177.8, "text": " Text to Image AI, and Stable Defusion 4, illustrating novels,"}, {"start": 178.44, "end": 185.88, "text": " texture synthesis, product design, weaving multiple images together to create these crazy movies,"}, {"start": 185.88, "end": 193.79999999999998, "text": " you name it. And now I wonder what unexpected uses will arise from this being possible for 3D"}, {"start": 193.79999999999998, "end": 200.44, "text": " models. Do you have some ideas? Let me know in the comments below. And just imagine what this"}, {"start": 200.44, "end": 207.0, "text": " will be capable of just a couple more papers down the line. For instance, the original Dolly AI"}, {"start": 207.0, "end": 215.16, "text": " was capable of this, and then just a year later this became possible. So how does this black magic"}, {"start": 215.16, "end": 221.72, "text": " work? Well, the cool thing is that this is also a diffusion-based technique, which means that"}, {"start": 221.72, "end": 228.68, "text": " similarly to the text to image AI's, it starts out from a piece of noise, and refines this noise"}, {"start": 228.68, "end": 236.28, "text": " over time to resemble our input text a little more. But this time, the diffusion process is running"}, {"start": 236.28, "end": 243.07999999999998, "text": " in higher dimensions, thus the result is not a 2D image, but a full 3D model."}, {"start": 243.08, "end": 251.32000000000002, "text": " So, from now on, the limit in creating 3D worlds is not our artistic skill, the limit is only our"}, {"start": 251.32000000000002, "end": 258.04, "text": " imagination. What a time to be alive! Wates and biases provide tools to track your experiments"}, {"start": 258.04, "end": 263.88, "text": " in your deep learning projects. What you see here is their amazing sweeps feature, which helps"}, {"start": 263.88, "end": 271.64, "text": " you find and reproduce your best runs, and even better, what made this particular run the best."}, {"start": 271.64, "end": 278.44, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 278.44, "end": 285.47999999999996, "text": " And the best part is that Wates and Biasis is free for all individuals, academics, and open-source"}, {"start": 285.47999999999996, "end": 292.84, "text": " projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video"}, {"start": 292.84, "end": 298.84, "text": " description, and you can get a free demo today. Our thanks to Wates and Biasis for their long"}, {"start": 298.84, "end": 304.11999999999995, "text": " standing support, and for helping us make better videos for you. Thanks for watching, and for"}, {"start": 304.12, "end": 333.96, "text": " your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NRmkr50mkEE
Ray Tracing: How NVIDIA Solved the Impossible!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The showcased papers are available here: https://research.nvidia.com/publication/2021-07_rearchitecting-spatiotemporal-resampling-production https://research.nvidia.com/publication/2022-07_generalized-resampled-importance-sampling-foundations-restir https://graphics.cs.utah.edu/research/projects/gris/ https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Link to the talk at GTC: https://www.nvidia.com/en-us/on-demand/session/gtcfall22-a41171/ If you wish to learn more about light transport, I have a course that is free for everyone, no strings attached: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
And dear fellow scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér, or not quite. To be more exact, I have had the honor to hold a talk here at GTC, and today we are going to marvel at a seemingly impossible problem and four miracle papers from scientists at Nvidia. How could these four papers solve the impossible? Well, we shall see together in a moment. If we wish to create a truly gorgeous photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm, and then this happens. Oh yes, concept number one. Noise. This is not photorealistic at all, not yet anyway. Why is that? Well, during this process, we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take from minutes to days for this to happen even for a smaller scene. For instance, this one took us three full weeks to finish. Three weeks. Yes, really. I am not kidding. Ouch. Solving this problem in real time seems absolutely impossible, which has been the consensus in the light transport research community for a long while. So much so that at the most prestigious computer graphics conference, SIGGRAPH, there was even a course by the name "Ray tracing is the future and ever will be". This was a bit of a wordplay, yes, but I hope you now have a feel of how impossible this problem feels. And when I was starting out as a first-year PhD student, I was wondering whether real-time light transport would be a possibility within my lifetime. It was such an outrageous idea I usually avoided even bringing up the question in conversation. And boy, if only I knew what we are going to be talking about today. Wow. So we are still at the point where these images take from hours to weeks to finish. And now I have good news and bad news. Let's go with the good news first. If you overhear some light transport researchers talking, this is why you hear the phrase importance sampling a great deal. This means choosing where to shoot these rays in the scene. For instance, you see one of those smart algorithms here called Metropolis Light Transport. This is one of my favorites.
And it gets better, because this is one paper from 2011, a more than 10-year-old paper, and it could do all this. Wow, how is that even possible? We just discussed that we'd be lucky to have this in our lifetimes and it seems that it was already here 10 years ago. So what is going on here? Is light transport suddenly solved? Well, not quite. This solves not the full light transport simulation problem, but it makes it a little simpler. How? Well, it takes two big shortcuts. Shortcut number one, it subdivides the space into voxels, small little boxes, and it runs the light simulation program on this reduced representation. Shortcut number two, it only computes two bounces for each light ray for the illumination. That is pretty good, but not nearly as great as a full solution with potentially infinitely many bounces. It also uses tons of memory. So plenty of things to improve here, but my goodness, if this was not a quantum leap in light transport simulation, I really don't know what is. This really shows that scientists at Nvidia are not afraid of completely rethinking existing systems to make them better, and boy, isn't this a marvelous example of that. And remember, all this in 2011, 2011, more than 10 years ago. Absolutely mind blowing. And one more thing, this is the combination of software and hardware working together, designing them for each other. This would not have been possible without it, but once again, this is not the full light transport. So can we be a little more ambitious and hope for a real time solution for the full light transport problem? Well, let's have a look together and find out. And here is where paper number two comes to the rescue. In this newer work of theirs, they presented an amusement park scene that contains a total of over 20 million triangles, and it truly is a sight to behold. Let's see, and oh my goodness, this does not take from minutes to days to compute, each of these images was produced in a matter of milliseconds. Wow, and it gets better, it can also render this scene with 3.4 million light sources, and this method can render not just an image, but an animation of it interactively. What's more, the more detailed comparisons in the paper reveal that this method is 10 to 100 times faster than previous techniques, and it also maps really well onto our graphics cards. Okay, but what is behind all this wizardry? How is this even possible? Well, the magic behind all this is a smarter allocation of these ray samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we move the camera a little to advance to the next image. Thus, lots of information that is otherwise thrown away can now be reused as we advance the animation (a minimal sketch of this kind of sample reuse follows at the end of this transcript). Now, note that there are so many papers out there on how to allocate these rays properly, this field is so mature, it truly is a challenge to create something that is just a few percentage points better than previous techniques. It is very hard to make even the tiniest difference, and to be able to create something that is 10 to 100 times better in this environment, that is insanity. And this proper ray allocation has one more advantage. What is that? Well, have a look at this. Imagine that you are a good painter and you are given this image. Now your task is to finish it. Do you know what this depicts? Hmm, maybe. But knowing all the details of this image is out of the question.
Now, look, we don't have to live with these noisy images, we have denoising algorithms tailored for light transport simulations. This one does some serious legwork with this noisy input, but even this one cannot possibly know exactly what is going on, because there is so much information missing from the noisy input. And now, if you have been holding onto your paper so far, squeeze that paper because, look, this technique can produce this image in the same amount of time. Now we're talking. Now let's give this to the denoising algorithm and yes, we get a much sharper, more detailed output. Actually, let's compare it to the clean reference image. Yes, yes, yes, this is much closer. This really blows my mind. We are now one step closer to proper interactive light transport. Now note that I used the word interactively twice here. I did not say real time. And that is not by mistake. These techniques are absolutely fantastic, one of the bigger leaps in light transport research, but they still cost a little more than what production systems can shoulder. They are not quite real time yet. And I hope you know what's coming. Oh yes, paper number three, check this out. This is their more recent result on the Paris Opera House scene, which is quite detailed. There is a ton going on here. And you are all experienced fellow scholars now. So when you see them flipping between the raw noisy and the denoised results, you know exactly what is going on. And hold onto your papers because all this takes about 12 milliseconds per frame. That is over 80 frames per second. Yes, yes, yes, my goodness. That is finally in the real-time domain. And then some. What a time to be alive. Okay, so where's the catch? Our keen eyes see that this is a static scene. It probably can't deal with dynamic movements and rapid changes in lighting, can it? Well, let's have a look. Wow, I cannot believe my eyes, dynamic movement. Checkmark. And here, this is as much change in the lighting as we would ever want. And it can do this too. And we are still not done yet. At this point, real time is fantastic. I cannot overstate how huge of an achievement that is. However, we need a little more to make sure that the technique works on a wide variety of practical cases. For instance, look here. Oh, yes, that is a ton of noise. And it's not only noise, it is the worst kind of noise, and that is high-frequency noise. The bane of our existence. What does that mean? It means these bright fireflies. If you show that to a light transport researcher, they will scream and run away. Why is that? It is because these are light paths that are difficult to get to, and hence take a ton more samples to clean up. And you know what is coming? Oh, of course, here is paper number four. Let's see what it can do for us. Am I seeing correctly? That is so much better. This seems nearly hopeless to clean up in a reasonable amount of time. And this, this might be ready to go as is with a good noise filter. How cool is that? Now, talking about difficult light paths, let's have a look at this beautiful caustic pattern here. Do you see it? Well, of course you don't. This region is so under-sampled, we not only can't see it, it is hard to even imagine what should be there. So, let's see if this new method can accelerate progress in this region. That is not true. That just cannot be true. When I first saw this paper, I could not believe this. And I had to recheck the results over and over again. This is, at the very least, a hundred times more developed caustic pattern.
Once again, with a good noise filter, probably ready to go as is. I absolutely love this one. Now note that there are still shortcomings. None of these techniques are perfect. Artifacts can still appear here and there, and around specular and glossy reflections, things are still not as clear as the reference simulation. However, we now have real-time light transport, and not only that, but the direction we are heading in is truly incredible. And amazing new papers are popping up what feels like every single month. And don't forget to apply the First Law of Papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. Also, Nvidia is amazing at democratizing these tools and putting them into the hands of everyone. Their tech transfer track record is excellent. For instance, their Marbles demo is already out there, and not many know that they already have a denoising technique that is online and ready to use for all of us. This one is a professional-grade tool right there. And many of the papers that you have heard about today may see the same fate. So, real-time light transport for all of us? Sign me up right now. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold onto your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So, join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 6.8, "text": " And dear fellow scholars, this is two minute papers with Dr. Karojona Ifehir, or not quite."}, {"start": 6.8, "end": 14.32, "text": " To be more exact, I have had the honor to hold a talk here at GTC, and today we are going to marvel"}, {"start": 14.32, "end": 21.44, "text": " at a seemingly impossible problem and four miracle papers from scientists at Nvidia."}, {"start": 22.16, "end": 29.36, "text": " Why these four could these papers solve the impossible? Well, we shall see together in a moment."}, {"start": 29.36, "end": 35.12, "text": " If we wish to create a truly gorgeous photorealistic scene in computer graphics,"}, {"start": 35.12, "end": 40.879999999999995, "text": " we usually reach out to a light transport simulation algorithm, and then this happens."}, {"start": 41.6, "end": 49.6, "text": " Oh yes, concept number one. Noise. This is not photorealistic at all, not yet anyway."}, {"start": 50.32, "end": 57.04, "text": " Why is that? Well, during this process, we have to shoot millions and millions of light rays"}, {"start": 57.04, "end": 63.44, "text": " into the scene to estimate how much light is bouncing around, and before we have simulated enough"}, {"start": 63.44, "end": 71.68, "text": " rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time,"}, {"start": 71.68, "end": 77.36, "text": " but it may take from minutes to days for this to happen even for a smaller scene."}, {"start": 77.92, "end": 86.64, "text": " For instance, this one took us three full weeks to finish. Three weeks. Yes, really. I am not"}, {"start": 86.64, "end": 94.16, "text": " kidding. Ouch. Solving this problem in real time seems absolutely impossible, which has been the"}, {"start": 94.16, "end": 100.4, "text": " consensus in the light transport research community for a long while. So much so that at the most"}, {"start": 100.4, "end": 105.92, "text": " prestigious computer graphics conference, C-Graph, there was even a course by the name,"}, {"start": 105.92, "end": 113.52, "text": " Ray Tracing is the future, and will ever be. This was a bit of a wordplay, yes, but I hope you"}, {"start": 113.52, "end": 120.47999999999999, "text": " now have a feel of how impossible this problem feels. And when I was starting out as a first-year"}, {"start": 120.47999999999999, "end": 127.84, "text": " PhD student, I was wondering whether real time light transport will be a possibility within my"}, {"start": 127.84, "end": 135.92, "text": " lifetime. It was such an outrageous idea I usually avoided even bringing up the question in conversation."}, {"start": 135.92, "end": 144.39999999999998, "text": " And boy, if only I knew what we are going to be talking about today. Wow. So we are still at the"}, {"start": 144.39999999999998, "end": 152.88, "text": " point where these images take from hours to weeks to finish. And now I have good news and bad news."}, {"start": 152.88, "end": 158.95999999999998, "text": " Let's go with the good news first. If you over here, some light transport researchers talking,"}, {"start": 158.95999999999998, "end": 165.6, "text": " this is why you hear the phrase important sampling a great deal. This means to choose where to"}, {"start": 165.6, "end": 172.07999999999998, "text": " shoot these rays in the scene. For instance, you see one of those smart algorithms here called"}, {"start": 172.07999999999998, "end": 178.95999999999998, "text": " Metropolis Light Transport. This is one of my favorites. 
It typically allocates these rays"}, {"start": 178.95999999999998, "end": 186.24, "text": " much smarter than previous techniques, especially on difficult scenes. But let's go even smarter."}, {"start": 186.79999999999998, "end": 194.0, "text": " This is my other favorite, Vensai Jacob's Manyfold Exploration Paper at work here. This algorithm"}, {"start": 194.0, "end": 200.88, "text": " is absolutely incredible. And the way it develops an image over time is one of the most beautiful"}, {"start": 200.88, "end": 207.36, "text": " sites in all light transport research. So if we understand correctly, the more complex these"}, {"start": 207.36, "end": 214.0, "text": " algorithms are, the smarter they can get. However, at the same time, due to their complexity,"}, {"start": 214.0, "end": 221.84, "text": " they cannot be implemented so well on the graphics card. That is a big bummer. So what do we do?"}, {"start": 221.84, "end": 227.44, "text": " Do we use a simpler algorithm and take advantage of the ever-improving graphics cards in our"}, {"start": 227.44, "end": 235.84, "text": " machines or write something smarter and miss out on all of that? So now I can't believe I am"}, {"start": 235.84, "end": 242.88, "text": " saying this, but let's see how Nvidia solved the impossible through four amazing papers."}, {"start": 243.52, "end": 248.0, "text": " And that is how they created real-time algorithms for light transport."}, {"start": 248.0, "end": 256.8, "text": " Paper number one, Voxel Cone Tracing. Oh my, this is an iconic paper that was one of the first"}, {"start": 256.8, "end": 265.36, "text": " signs of something bigger to come. Now hold on to your papers and look at this. Oh my goodness,"}, {"start": 265.36, "end": 273.92, "text": " that is a beautiful real-time light simulation program. And it gets better because this one paper"}, {"start": 273.92, "end": 284.32, "text": " that is from 2011, a more than 10-year-old paper, and it could do all this. Wow, how is that even possible?"}, {"start": 284.32, "end": 290.08000000000004, "text": " We just discussed that we'd be lucky to have this in our lifetimes and it seems that it was already"}, {"start": 290.08000000000004, "end": 299.28000000000003, "text": " here 10 years ago. So what is going on here? Is light transport suddenly solved? Well, not quite."}, {"start": 299.28, "end": 305.44, "text": " This solves not the full light transport simulation problem, but it makes it a little simpler."}, {"start": 306.08, "end": 313.76, "text": " How? Well, it takes two big shortcuts. Shortcut number one, it subdivates the space"}, {"start": 313.76, "end": 321.67999999999995, "text": " into Voxels, small little boxes, and it runs the light simulation program on this reduced representation."}, {"start": 321.68, "end": 329.04, "text": " Shortcut number two, it only computes two bounces for each light ray for the illumination."}, {"start": 329.04, "end": 337.44, "text": " That is pretty good, but not nearly as great as a full solution with potentially infinitely many"}, {"start": 337.44, "end": 345.52, "text": " bounces. It also uses tons of memory. 
So plenty of things to improve here, but my goodness,"}, {"start": 345.52, "end": 350.88, "text": " if this was not a quantum leap in light transport simulation, I really don't know what is."}, {"start": 350.88, "end": 358.71999999999997, "text": " This really shows that scientists at Nvidia are not afraid of completely rethinking existing systems"}, {"start": 358.71999999999997, "end": 366.8, "text": " to make them better, and boy, isn't this a marvelous example of that. And remember, all this in 2011,"}, {"start": 366.8, "end": 376.8, "text": " 2011, more than 10 years ago. Absolutely mind blowing. And one more thing, this is the combination"}, {"start": 376.8, "end": 384.24, "text": " of software and hardware working together, designing them for each other. This would not have"}, {"start": 384.24, "end": 391.44, "text": " been possible without it, but once again, this is not the full light transport. So can we be a"}, {"start": 391.44, "end": 397.6, "text": " little more ambitious and hope for a real time solution for the full light transport problem?"}, {"start": 398.24, "end": 405.04, "text": " Well, let's have a look together and find out. And here is where paper number two comes to the"}, {"start": 405.04, "end": 411.68, "text": " rescue. In this newer work of theirs, they presented an amusement park scene that contains a total"}, {"start": 411.68, "end": 421.44, "text": " of over 20 million triangles, and it truly is a sight to behold. Let's see, and oh my goodness,"}, {"start": 421.44, "end": 427.44, "text": " this does not take from minutes to days to compute, each of these images were produced in a matter"}, {"start": 427.44, "end": 437.44, "text": " of milliseconds. Wow, and it gets better, it can also render this scene with 3.4 million light sources,"}, {"start": 437.44, "end": 443.84, "text": " and this method can render not just an image, but an animation of it interactively."}, {"start": 444.4, "end": 451.2, "text": " What's more, the more detailed comparisons in the paper reveal that this method is 10 to 100"}, {"start": 451.2, "end": 457.68, "text": " times faster than previous techniques, and it also maps really well onto our graphics cards."}, {"start": 458.32, "end": 463.92, "text": " Okay, but what is behind all this wizardry? How is this even possible?"}, {"start": 464.71999999999997, "end": 470.88, "text": " Well, the magic behind all this is a smarter allocation of these race samples that we have to"}, {"start": 470.88, "end": 477.91999999999996, "text": " shoot into the scene. For instance, this technique does not forget what we did just a moment to go"}, {"start": 477.92, "end": 484.48, "text": " when we move the camera a little to advance to the next image. Thus, lots of information that is"}, {"start": 484.48, "end": 491.84000000000003, "text": " otherwise thrown away can now be reused as we advance the animation. Now, note that there are"}, {"start": 491.84000000000003, "end": 498.32, "text": " so many papers out there on how to allocate these race properly, this field is so mature,"}, {"start": 498.32, "end": 504.8, "text": " it truly is a challenge to create something that is just a few percentage points better than previous"}, {"start": 504.8, "end": 511.84000000000003, "text": " techniques. It is very hard to make even the tiniest difference, and to be able to create something"}, {"start": 511.84000000000003, "end": 519.76, "text": " that is 10 to 100 times better in this environment, that is insanity. 
And this proper"}, {"start": 519.76, "end": 527.28, "text": " ray allocation has one more advantage. What is that? Well, have a look at this. Imagine that you"}, {"start": 527.28, "end": 534.5600000000001, "text": " are a good painter and you are given this image. Now your task is to finish it. Do you know what"}, {"start": 534.56, "end": 542.2399999999999, "text": " this depicts? Hmm, maybe. But knowing all the details of this image is out of question."}, {"start": 542.7199999999999, "end": 549.3599999999999, "text": " Now, look, we don't have to live with these noisy images, we have the noisy algorithms tailored"}, {"start": 549.3599999999999, "end": 556.64, "text": " for light simulations. This one does some serious lag work with this noisy input, but even this one"}, {"start": 556.64, "end": 562.88, "text": " cannot possibly know exactly what is going on because there is so much information missing"}, {"start": 562.88, "end": 569.6, "text": " from the noisy input. And now, if you have been holding onto your paper so far, squeeze that paper"}, {"start": 569.6, "end": 577.12, "text": " because, look, this technique can produce this image in the same amount of time. Now we're talking."}, {"start": 577.76, "end": 586.0, "text": " Now let's give this to the denoising algorithm and yes, we get a much sharper, more detailed"}, {"start": 586.0, "end": 594.24, "text": " output. Actually, let's compare it to the clean reference image. Yes, yes, yes, this is much closer."}, {"start": 594.56, "end": 601.36, "text": " This really blows my mind. We are now one step closer to proper interactive light transport."}, {"start": 602.0, "end": 610.16, "text": " Now note that I use the word interactively twice here. I did not say real time. And that is not"}, {"start": 610.16, "end": 616.7199999999999, "text": " by mistake. These techniques are absolutely fantastic. One of the bigger leaves in light transport"}, {"start": 616.7199999999999, "end": 624.0, "text": " research, but they still cost a little more than what production systems can shoulder. They are"}, {"start": 624.0, "end": 632.48, "text": " not quite real time yet. And I hope you know what's coming. Oh yes, paper number three, check this"}, {"start": 632.48, "end": 639.52, "text": " out. This is their more recent result on the Paris Opera House scene, which is quite detailed."}, {"start": 639.52, "end": 646.56, "text": " There is a ton going on here. And you are all experienced fellow scholars now. So when you see"}, {"start": 646.56, "end": 654.0799999999999, "text": " them flicking between the raw noisy and the denoised results, you know exactly what is going on."}, {"start": 654.0799999999999, "end": 663.12, "text": " And hold onto your papers because all this takes about 12 milliseconds per frame. That is over 80"}, {"start": 663.12, "end": 671.12, "text": " frames per second. Yes, yes, yes, my goodness. That is finally in the real time domain. And then some."}, {"start": 671.68, "end": 680.08, "text": " What a time to be alive. Okay, so where's the catch? Are keen eyes see that this is a static scene?"}, {"start": 680.08, "end": 687.12, "text": " It probably can deal with dynamic movements and rapid changes in lighting. Can it? Well, let's have a look."}, {"start": 687.12, "end": 697.12, "text": " Wow, I cannot believe my eyes dynamic movement. Checkmark. And here this is as much change in the"}, {"start": 697.12, "end": 704.72, "text": " lighting as we would ever want. And it can do this too. And we are still not done yet. 
At this"}, {"start": 704.72, "end": 712.5600000000001, "text": " point, real time is fantastic. I cannot overstate how huge of an achievement that is. However,"}, {"start": 712.56, "end": 719.04, "text": " we need a little more to make sure that the technique works on a wide variety of practical cases."}, {"start": 719.68, "end": 728.0799999999999, "text": " For instance, look here. Oh, yes, that is a ton of noise. And it's not only noise, it is the"}, {"start": 728.0799999999999, "end": 735.68, "text": " worst kind of noise. And that is high frequency noise. The beam of our existence. What does that mean?"}, {"start": 735.68, "end": 743.4399999999999, "text": " It means these bright fireflies. If you show that to a light transport researcher, they will scream and run away."}, {"start": 744.16, "end": 751.04, "text": " Why is that? It is because these are light pass that are difficult to get to. And hence,"}, {"start": 751.04, "end": 759.12, "text": " take a ton more samples to clean up. And you know what is coming? Oh, of course, here is paper"}, {"start": 759.12, "end": 766.5600000000001, "text": " number four. Let's see what it can do for us. Am I seeing correctly? That is so much better."}, {"start": 766.88, "end": 773.52, "text": " This seems nearly hopeless to clean up in a reasonable amount of time. And this, this might be"}, {"start": 773.52, "end": 782.08, "text": " ready to go as is with a good noise filter. How cool is that? Now, talking about difficult light"}, {"start": 782.08, "end": 789.2, "text": " pass, let's have a look at this beautiful, caustic pattern here. Do you see it? Well, of course,"}, {"start": 789.2, "end": 796.96, "text": " you don't. This region is so under-sampled, we not only can't see it, it is hard to even imagine"}, {"start": 796.96, "end": 803.5200000000001, "text": " what should be there. So, let's see if this new method can accelerate progress in this region."}, {"start": 803.52, "end": 812.24, "text": " That is not true. That just cannot be true. When I first saw this paper, I could not believe this."}, {"start": 812.24, "end": 818.56, "text": " And I had to recheck the results over and over again. This is, at the very least,"}, {"start": 818.56, "end": 824.56, "text": " a hundred times more developed caustic pattern. Once again, with a good noise filter,"}, {"start": 824.56, "end": 833.8399999999999, "text": " probably ready to go as is. I absolutely love this one. Now note that there are still shortcomings."}, {"start": 833.8399999999999, "end": 838.88, "text": " None of these techniques are perfect. Artifacts can still appear here and there,"}, {"start": 838.88, "end": 845.1999999999999, "text": " and around specular and glossary reflections, things are still not as clear as the reference"}, {"start": 845.1999999999999, "end": 852.3199999999999, "text": " simulation. However, we now have a real time-light transport, and not only that, but the direction"}, {"start": 852.32, "end": 859.2800000000001, "text": " we are heading to is truly incredible. And amazing new papers are popping up what feels like"}, {"start": 859.2800000000001, "end": 865.84, "text": " every single month. And don't forget to apply the first law of papers, which says that research"}, {"start": 865.84, "end": 872.1600000000001, "text": " is a process. 
Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 872.72, "end": 880.1600000000001, "text": " Also, Nvidia is amazing at democratizing these tools and putting them into the hands of everyone."}, {"start": 880.16, "end": 887.4399999999999, "text": " Their tech transfer track record is excellent. For instance, their Morbos demo is already out there,"}, {"start": 887.4399999999999, "end": 894.56, "text": " and not many know that they already have the noisy technique that is online and ready to use"}, {"start": 894.56, "end": 901.8399999999999, "text": " for all of us. This one is a professional grade tool right there. And many of the papers that you"}, {"start": 901.8399999999999, "end": 908.3199999999999, "text": " have heard about today may see the same fit. So, real time-light transport for all of us,"}, {"start": 908.32, "end": 916.88, "text": " sign me up right now. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers"}, {"start": 916.88, "end": 924.08, "text": " the best prices in the world for GPU cloud compute. No commitments or negotiation required."}, {"start": 924.08, "end": 931.6, "text": " Just sign up and launch an instance. And hold onto your papers, because with Lambda GPU cloud,"}, {"start": 931.6, "end": 941.84, "text": " you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 941.84, "end": 950.08, "text": " That's 73% savings. Did I mention they also offer persistent storage? So, join researchers at"}, {"start": 950.08, "end": 958.5600000000001, "text": " organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 958.56, "end": 965.28, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 965.28, "end": 995.12, "text": " instances today. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wqvAconYgK0
Watch NVIDIA’s AI Teach This Human To Run! 🏃‍♂️
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Accelerated Policy Learning with Parallel Differentiable Simulation" is available here: https://short-horizon-actor-critic.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today is going to be a really fun day where we do this, and this, and this. In a previous episode, we talked about creating virtual characters with a skeletal system, adding more than 300 muscles, and teaching them to use these muscles to kick, jump, move around, and perform other realistic human movements. You see the activated muscles with red. I am loving the idea, which, it turns out, comes with lots of really interesting applications. For instance, this simulation realistically portrays how increasing the amount of weight to be lifted changes which muscles are being trained during a workout. These agents also learn to jump really high, and you can see a drastic difference between the movement required for a mediocre jump and an amazing one. And now, scientists at Nvidia had a crazy idea. They said, what if we take a similar model and ask it to learn to walk from scratch? Now that is indeed a crazy idea, because they proposed a model that is driven by over 150 muscle-tendon units and is thus very difficult to control. So, let's see how it went. First, it started to... hello? Well, A plus for effort, little AI, but unfortunately this is not a great start. But don't despair; a little later it learned to... well, fall in a different direction. However, at least some learning is hopefully happening. Look, I wouldn't say that it has finally taken the first step, but at least it is attempting to take a first step. Is that good news? Let's see. Oh yes, yes it is, because a little later it learned to jog. This concept really works. And if we wait for a bit longer, we see that it learned to run as well. Fantastic. Now, let's have a closer look and see if the colors of the muscles indeed show us which ones are activated at a given moment. And that's right. When slowing the footage down, we see the difficulty of the problem, and that is that different tendons need to be moved every single moment. So, while we look at this technique learning other tasks, we ask one of the most important questions here, and that is: how fast did it learn to run? It had to control 150 different tendons continuously over time without falling. So, how long did it take? Did it take days? And now, hold on to your papers, because it didn't take days. It only took minutes. After starting out like this, by the 17-minute mark, it had learned so much that it could jog. How amazing is that? And that is one of the key value propositions of this paper. It can not only teach this AI agent difficult tasks, but it can also learn up to 15 to 17 times faster than previous techniques. That is absolutely amazing. Bravo! So, it seems that now we have learning-based algorithms that can teach even a complex muscle-actuated robot to walk. What a time to be alive! This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions.
Make sure to go to Cohere.ai slash papers, or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
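For readers curious how training can be this fast, here is a heavily simplified PyTorch sketch of the short-horizon idea behind the paper above ("Accelerated Policy Learning with Parallel Differentiable Simulation"): because the simulator itself is differentiable, the reward gradient can flow back through the physics into the policy. The ToyMuscleSim stand-in and all sizes here are my own illustrative assumptions, not the paper's actual code.

```python
import torch

N_MUSCLES = 150   # roughly the number of muscle-tendon units in the video
STATE_DIM = 64    # illustrative state size, not from the paper
HORIZON = 32      # short rollout window before truncating gradients

class ToyMuscleSim:
    """Stand-in differentiable 'physics'; NOT the paper's simulator."""
    def __init__(self):
        self.W = torch.randn(N_MUSCLES, STATE_DIM) * 0.01

    def step(self, state, activations):
        next_state = torch.tanh(state + activations @ self.W)  # differentiable step
        reward = next_state[..., 0].mean()  # pretend reward: forward velocity
        return next_state, reward

policy = torch.nn.Sequential(
    torch.nn.Linear(STATE_DIM, 128), torch.nn.ELU(),
    torch.nn.Linear(128, N_MUSCLES), torch.nn.Sigmoid(),  # activations in [0, 1]
)
opt = torch.optim.Adam(policy.parameters(), lr=2e-3)
sim = ToyMuscleSim()
state = torch.zeros(256, STATE_DIM)  # 256 parallel environments

for window in range(10):
    total_reward = 0.0
    for _ in range(HORIZON):
        state, reward = sim.step(state, policy(state))
        total_reward = total_reward + reward
    loss = -total_reward   # gradient ascent on reward, straight through the physics
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = state.detach()  # truncate gradients between short windows
```

The short windows and the many parallel environments are the point: instead of estimating gradients from trial and error, each window gets an exact gradient through the simulation, which is a big part of why learning can take minutes rather than days.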
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jolnai-Fehir."}, {"start": 4.4, "end": 12.16, "text": " Today is going to be a really fun day where we do this and this and this."}, {"start": 12.16, "end": 18.32, "text": " In a previous episode, we talked about creating virtual characters with a skeletal system"}, {"start": 18.32, "end": 27.68, "text": " adding more than 300 muscles and teaching them to use these muscles to kick, jump, move around,"}, {"start": 27.68, "end": 34.56, "text": " and perform other realistic human movements. You see the activated muscles with red."}, {"start": 34.56, "end": 41.84, "text": " I am loving the idea which turns out comes with lots of really interesting applications."}, {"start": 41.84, "end": 48.64, "text": " For instance, this simulation realistically portrays how increasing the amount of weight to be lifted"}, {"start": 48.64, "end": 52.96, "text": " changes what muscles are being trained during a workout."}, {"start": 52.96, "end": 59.28, "text": " These agents also learn to jump really high and you can see a drastic difference between"}, {"start": 59.28, "end": 63.84, "text": " the movement required for a mediocre jump and an amazing one."}, {"start": 64.4, "end": 71.84, "text": " And now, scientists at Nvidia had a crazy idea. They said, what if we take a similar model"}, {"start": 71.84, "end": 79.2, "text": " and ask it to learn to walk from scratch? Now that is indeed a crazy idea because they"}, {"start": 79.2, "end": 87.84, "text": " proposed a model that is driven by over 150 muscle tendon units and is thus very difficult to control."}, {"start": 87.84, "end": 91.76, "text": " So, let's see how it went. First, it started too."}, {"start": 93.52000000000001, "end": 101.04, "text": " Hello? Well, A plus for effort, little AI, but unfortunately this is not a great start."}, {"start": 101.04, "end": 108.08, "text": " But don't despair, a little later it learned too. Well, fall in a different direction."}, {"start": 108.08, "end": 115.28, "text": " However, at least some learning is hopefully happening. Look, I wouldn't say that it has finally"}, {"start": 115.28, "end": 121.75999999999999, "text": " taken the first step, but at least it is attempting to take a first step. Is that good news?"}, {"start": 122.56, "end": 131.44, "text": " Let's see. Oh yes, yes it is because a little later it learned to jog. This concept really works."}, {"start": 131.44, "end": 138.88, "text": " And if we wait for a bit longer, we see that it learned to run as well. Fantastic. Now,"}, {"start": 138.88, "end": 145.35999999999999, "text": " let's have a closer look and see if the colors of the muscles indeed show us which ones are"}, {"start": 145.35999999999999, "end": 152.64, "text": " activated at a given moment. And that's right. When slowing the footage down, we see the difficulty"}, {"start": 152.64, "end": 158.4, "text": " of the problem. And that is, different tendons need to be moved every single moment."}, {"start": 158.4, "end": 164.4, "text": " So, while we look at this technique, learning other tasks, we ask one of the most important"}, {"start": 164.4, "end": 171.76, "text": " questions here, and that is, how fast did it learn to run? It had to control 150 different"}, {"start": 171.76, "end": 179.12, "text": " tendons continuously over time without falling. So, how long did it take? 
Did it take days?"}, {"start": 179.68, "end": 185.92000000000002, "text": " And now, hold on to your papers because it hasn't taken days. It only takes minutes."}, {"start": 185.92, "end": 193.11999999999998, "text": " After starting out like this, by the 17 minute mark, it has learned so much that it could jog."}, {"start": 193.11999999999998, "end": 200.39999999999998, "text": " How amazing is that? And that is one of the key value propositions of this paper. It can not only"}, {"start": 200.39999999999998, "end": 209.76, "text": " teach this AI agent difficult tasks, but it can also learn up to 15 to 17 times faster than previous"}, {"start": 209.76, "end": 217.28, "text": " techniques. That is absolutely amazing. Bravo! So, it seems that now we have learning-based"}, {"start": 217.28, "end": 225.04, "text": " algorithms that could teach even a complex muscle-actuated robot to walk. What a time to be alive!"}, {"start": 225.04, "end": 231.92, "text": " This episode has been supported by CoHear AI. CoHear builds large language models and makes them"}, {"start": 231.92, "end": 237.84, "text": " available through an API so businesses can add advanced language understanding to their system"}, {"start": 237.84, "end": 245.28, "text": " or app quickly with just one line of code. You can use your own data, whether it's text from"}, {"start": 245.28, "end": 252.08, "text": " customer service requests, legal contracts, or social media posts to create your own custom models"}, {"start": 252.08, "end": 259.92, "text": " to understand text, or even generate it. For instance, it can be used to automatically determine"}, {"start": 259.92, "end": 267.36, "text": " whether your messages are about your business hours, returns, or shipping, or it can be used to"}, {"start": 267.36, "end": 273.36, "text": " generate a list of possible sentences you can use for your product descriptions. Make sure to go"}, {"start": 273.36, "end": 280.16, "text": " to CoHear.ai slash papers, or click the link in the video description and give it a try today."}, {"start": 280.16, "end": 299.76000000000005, "text": " It's super easy to use. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8NAi30ZBpJU
Wow, A Simulation That Looks Like Reality! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 My paper "The flow from simulation to reality" with clickable citations is available here: https://www.nature.com/articles/s41567-022-01788-5 📝 Read it for free here! https://rdcu.be/cWPfD ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am so happy to tell you that today my dream came true. I can't believe it. So, what happened? Well, I am going to show you my new paper that just came out. This is a short Comment article published in Nature Physics, where I had the opportunity to be a bit of an ambassador for the computer graphics research community. You see, this is a small field, but it has so many absolutely amazing papers. I am really honored for the opportunity to show them to the entire world. So, let's marvel together today, as you will see some amazing papers throughout this video. For instance, what about this one? What are we seeing here? Yes, what you see here is a physics simulation from a previous technique that is so detailed it is almost indistinguishable from reality. So, what you are seeing here should be absolutely impossible. Why is that? Well, to find out, we are going to look at two different branches of fluid simulations. Scientists who work in the area of computational fluid dynamics have been able to write such simulations for more than 60 years now. And these are the really rigorous simulations that can accurately tell how nature works in hypothetical situations. This is super useful, for instance, for wind tunnel tests for cars, testing different aircraft and turbine designs, and more. However, these simulations can easily take from days to weeks to compute. Now, here is the other branch. Yes, that is computer graphics. What is the goal for computer graphics simulation research? Well, the goal is exactly what you see here. Yes, for us graphics people, a simulation does not need to be perfect. The goal is to create a simulation that is good enough to fool the human eye. These are excellent for feature-length movies, virtual worlds, quick visualizations, and more. And here is the best part. In return, most computer graphics techniques don't run from days to weeks. They run in a few minutes to a few hours at most. So, computational fluid dynamics: slow, but rigorous. Computer graphics: fast, but approximate. These are two completely different branches. Now, of course, any self-respecting scholar would now ask, OK, but Károly, how do you optimize an algorithm that took weeks to run so that it now runs in a matter of minutes? Well, here are two amazing ideas that make this possible. One is spatial adaptivity. What does that do? Well, it does this. Oh yes, it allocates most of our computational resources to regions where the real action happens. You see these more turbulent regions with dark blue. The orange or yellow parts indicate calmer regions where we can get away with less computation. You see, this is an approximate model, but a really good one, which speeds up the computation a great deal, and at a typically small cost. That's the hallmark of a good computer graphics paper. Two, in my opinion, this is one of the most beautiful papers in all of computer graphics: surface-only liquids. Legendary work. Here, we lean on the observation that most of the interesting detail in a fluid simulation is visible on the surface of the liquids, so why not concentrate on that? Now, of course, this is easier said than done, and this work really pulled it off. But we are not stopping here. We can also use this amazing paper to add realistic bubbles to a simulation, and you will see in a moment that it greatly enhances the realism of these videos. Look at that! I absolutely love it!
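As a rough illustration of the spatial adaptivity idea above, here is a small sketch that flags the "regions where the real action happens" using vorticity as the refinement signal. The threshold and the vorticity criterion are my own simplified assumptions; real adaptive solvers use more sophisticated refinement rules.

```python
import numpy as np

def refinement_mask(vel_x, vel_y, threshold=0.5):
    """Flag 2D grid cells with high vorticity (the turbulent, 'dark blue'
    regions) for refinement; the calm ('orange/yellow') regions stay coarse."""
    dvy_dx = np.gradient(vel_y, axis=1)
    dvx_dy = np.gradient(vel_x, axis=0)
    vorticity = np.abs(dvy_dx - dvx_dy)  # curl of the 2D velocity field
    return vorticity > threshold          # True = spend compute here

# Toy usage: a random velocity field; in practice this comes from the solver.
vx, vy = np.random.randn(2, 128, 128)
mask = refinement_mask(vx, vy)
print(f"refining {mask.mean():.0%} of cells, coarsening the rest")
```

Cells where the mask is True would be subdivided for extra resolution, while the rest can be simulated on a coarser grid at a fraction of the cost.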
If the video ends here, it is because I cannot stop looking at and running these simulations. Okay, it took a while, but I am back. I have recovered. So, the key to this work is that it does not do the full surface tension calculations that are typically required for simulating bubbles. No, no, that would be prohibitively expensive. It just says that, look, bubbles typically appear in regions where air gets trapped by the fluid. And it also proposes an algorithm to find these regions. And the algorithm is so simple, I could not believe my eyes. I thought that this was some kind of a prank. And it turns out we only need to look at the curvature of the fluid, and only nearby, which is really cheap, simple, and fast. And it really works. I can't believe it. Once again, this is an approximate solution, not the real deal, but it is extremely simple and fast, and it can even be added as a post-processing step to a simulation that is already finished without the bubbles. How cool is that? Loving it. Now, wait, we just said that surface tension computations are out of reach. That just costs too much. Or does it? Here is an incredible graphics paper from last year that does just that. So what is all this surface tension thing good for? Well, for instance, it can simulate this paper clip floating on water. That is quite remarkable, because the density of the paper clip is 8 times as much as the water itself, and yet it still sits on top of the water. We can also drop a bunch of cherries into water and milk and get these beautiful simulations. Yes, these are not real videos. These are all simulated on a computer. Can you tell the difference? I'd love to know. Let me know in the comments below. And get this: for simpler scenes, we only need to wait for a few seconds for each frame. That is insanity. I told you that there are some magical works within computer graphics. I am so happy for the opportunity to share them with you. However, wait a second. Remember, we said that these graphics works are fast and approximate. But is this really true? Can they really hold a candle to these rigorous computational fluid dynamics simulations that are super accurate, but take so long? That is impossible, right? A quick simulation on our home computer cannot possibly tell us anything new about aircraft designs. Can it? Well, hold on to your papers, because it can. And it not only can, but you are already looking at it. This is a devilishly difficult test, a real aerodynamic simulation in a wind tunnel. In these cases, getting really accurate results is critical. For instance, here we would like to see that if we were to add a spoiler to this car, how much of an aerodynamic advantage we would get in return. Here are the results from the real wind tunnel test. And now, let's see how the new method compares. Wow! Goodness. It is not perfect by any means, but it seems accurate enough that we can see the wake flow of the car clearly enough to make an informed decision on that spoiler. Yes, this and even more is possible with computer graphics algorithms. These approximate solutions became so accurate and so fast that we are seeing these two seemingly completely different branches, computational fluid dynamics and computer graphics, getting closer and closer to each other. Even just a couple of years ago, I did not dare to think that this would ever be possible.
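To illustrate the "only look at local curvature" trick described above, here is a minimal sketch that marks bubble-seeding spots on a liquid height field wherever the surface forms a sharply concave pocket. This is a simplified height-field reading of the idea, with a made-up threshold, not the paper's exact algorithm.

```python
import numpy as np

def bubble_seed_spots(surface_height, curvature_threshold=0.8):
    """Find concave pockets of the liquid surface, where air tends to get
    trapped, by looking only at cheap local curvature."""
    h = surface_height
    # Discrete Laplacian approximates local mean curvature of the height field.
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0)
           + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    # A strongly positive Laplacian means a local dip (concave pocket),
    # which is a candidate spot for spawning a bubble.
    return np.argwhere(lap > curvature_threshold)

# Toy usage: a flat surface with one dip where a bubble could spawn.
surf = np.zeros((16, 16))
surf[8, 8] = -1.0
print(bubble_seed_spots(surf))  # -> [[8 8]]
```

Because this only reads nearby values, it can run as a cheap post-process over a finished simulation, which matches how the technique is described above.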
And yet, these fast and predictive simulations are within arm's reach, and just a couple more papers down the line, we might be able to enter a world where an engineer is able to test new ideas in aircraft design every few minutes. What a time to be alive! So, all this came about because I am worried that, as the field of computer graphics is so small, there are some true gems out there, and if we don't talk about these works, I am afraid that almost no one will. And this is why Two Minute Papers and this paper came into existence. And I am so happy to have a Comment paper accepted to Nature Physics, and now to be able to show it to you. By the way, the paper is a quick read, and you can read it for free through the link in the video description. I think we are getting so close to real-life-like simulations that everyone has to hear about it. So cool! If you wish to help me spread the word about these incredible works, please consider sharing this with your friends and tweeting about it. I would also like to send a big thank you for this amazing opportunity to write this paper, and to Christopher Batty, who sent super useful comments. So, what do you think? What would you use these techniques for? Let me know in the comments below. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kato Zsolnai-Fehir."}, {"start": 4.64, "end": 9.52, "text": " I am so happy to tell you that today my dream came true."}, {"start": 9.52, "end": 11.4, "text": " I can't believe it."}, {"start": 11.4, "end": 13.200000000000001, "text": " So, what happened?"}, {"start": 13.200000000000001, "end": 17.36, "text": " Well, I am going to show you my new paper that just came out."}, {"start": 17.36, "end": 21.56, "text": " This is a short comment article published in Nature Physics"}, {"start": 21.56, "end": 28.64, "text": " where I had the opportunity to be a bit of an ambassador for the Computer Graphics Research Community."}, {"start": 28.64, "end": 35.8, "text": " You see, this is a small field, but it has so many absolutely amazing papers."}, {"start": 35.8, "end": 41.0, "text": " I am really honored for the opportunity to show them to the entire world."}, {"start": 41.0, "end": 47.760000000000005, "text": " So, let's marvel together today as you will see some amazing papers throughout this video."}, {"start": 47.760000000000005, "end": 50.36, "text": " For instance, what about this one?"}, {"start": 50.36, "end": 52.2, "text": " What are we seeing here?"}, {"start": 52.2, "end": 57.0, "text": " Yes, what you see here is a physics simulation from a previous technique"}, {"start": 57.0, "end": 62.24, "text": " that is so detailed it is almost indistinguishable from reality."}, {"start": 62.24, "end": 67.12, "text": " So, what you are seeing here should be absolutely impossible."}, {"start": 67.12, "end": 68.64, "text": " Why is that?"}, {"start": 68.64, "end": 74.48, "text": " Well, to find out, we are going to look at two different branches of fluid simulations."}, {"start": 74.48, "end": 78.76, "text": " Scientists who work in the area of computational fluid dynamics"}, {"start": 78.76, "end": 84.0, "text": " have been able to write such simulations for more than 60 years now."}, {"start": 84.0, "end": 88.52, "text": " And these are the really rigorous simulations that can accurately tell"}, {"start": 88.52, "end": 92.0, "text": " how nature works in hypothetical situations."}, {"start": 92.0, "end": 93.88, "text": " This is super useful."}, {"start": 93.88, "end": 98.56, "text": " For instance, for wind tunnel tests for cars, testing different aircraft"}, {"start": 98.56, "end": 101.24000000000001, "text": " and turbine designs and more."}, {"start": 101.24000000000001, "end": 107.24000000000001, "text": " However, these simulations can easily take from days to weeks to compute."}, {"start": 107.24000000000001, "end": 109.68, "text": " Now, here is the other branch."}, {"start": 109.68, "end": 112.2, "text": " Yes, that is Computer Graphics."}, {"start": 112.2, "end": 116.0, "text": " What is the goal for Computer Graphics simulation research?"}, {"start": 116.0, "end": 120.0, "text": " Well, the goal is exactly what you see here."}, {"start": 120.0, "end": 125.32000000000001, "text": " Yes, for us, graphics people, a simulation does not need to be perfect."}, {"start": 125.32000000000001, "end": 131.0, "text": " The goal is to create a simulation that is good enough to fool the human eye."}, {"start": 131.0, "end": 135.4, "text": " These are excellent for feature-length movies, virtual worlds,"}, {"start": 135.4, "end": 137.72, "text": " quick visualizations and more."}, {"start": 137.72, "end": 142.84, "text": " And here is the best part. 
In return, most computer graphics techniques"}, {"start": 142.84, "end": 144.84, "text": " don't run from days to weeks."}, {"start": 144.84, "end": 149.64, "text": " They run in a few minutes to a few hours at most."}, {"start": 149.64, "end": 154.28, "text": " So, computational fluid dynamics slow, but rigorous,"}, {"start": 154.28, "end": 157.64, "text": " computer graphics fast, but approximate."}, {"start": 157.64, "end": 160.84, "text": " These are two completely different branches."}, {"start": 160.84, "end": 164.76, "text": " Now, of course, any surface-pecting scholar would now ask,"}, {"start": 164.76, "end": 170.67999999999998, "text": " OK, but Karoi, how do you optimize an algorithm that took weeks to run"}, {"start": 170.67999999999998, "end": 173.64, "text": " to now run in a matter of minutes?"}, {"start": 173.64, "end": 177.88, "text": " Well, here are two amazing ideas to perform that."}, {"start": 177.88, "end": 180.35999999999999, "text": " One is spatial adaptivity."}, {"start": 180.35999999999999, "end": 181.79999999999998, "text": " What does that do?"}, {"start": 181.79999999999998, "end": 184.04, "text": " Well, it does this."}, {"start": 184.04, "end": 188.04, "text": " Oh, yes, it allocates most of our computational resources"}, {"start": 188.04, "end": 190.84, "text": " to regions where the real action happens."}, {"start": 190.84, "end": 194.51999999999998, "text": " You see these more turbulent regions with dark blue."}, {"start": 194.52, "end": 198.12, "text": " The orange or yellow parts indicate calmer regions"}, {"start": 198.12, "end": 201.0, "text": " where we can get away with less computation."}, {"start": 201.0, "end": 203.72, "text": " You see, this is an approximate model,"}, {"start": 203.72, "end": 207.24, "text": " but a really good one, which speeds up the computation,"}, {"start": 207.24, "end": 210.68, "text": " a great deal, and a typically small cost."}, {"start": 210.68, "end": 214.28, "text": " That's the whole mark of a good computer graphics paper."}, {"start": 214.28, "end": 218.92000000000002, "text": " Two, in my opinion, this is one of the most beautiful papers"}, {"start": 218.92000000000002, "end": 224.04000000000002, "text": " in all of computer graphics, surface-only liquids, legendary work."}, {"start": 224.04, "end": 228.28, "text": " Here, we lean on the observation that most of the interesting detail"}, {"start": 228.28, "end": 232.35999999999999, "text": " in a fluid simulation is visible on the surface of the liquids,"}, {"start": 232.35999999999999, "end": 235.39999999999998, "text": " so why not concentrate on that?"}, {"start": 235.39999999999998, "end": 238.44, "text": " Now, of course, this is easier said than done,"}, {"start": 238.44, "end": 241.32, "text": " and this work really pulled it off."}, {"start": 241.32, "end": 243.88, "text": " But we are not stopping here."}, {"start": 243.88, "end": 249.79999999999998, "text": " We can also use this amazing paper to add realistic bubbles to a simulation,"}, {"start": 249.79999999999998, "end": 253.79999999999998, "text": " and you will see in a moment that it greatly enhances the realism"}, {"start": 253.8, "end": 255.96, "text": " of these videos."}, {"start": 255.96, "end": 257.56, "text": " Look at that!"}, {"start": 257.56, "end": 259.96000000000004, "text": " I absolutely love it!"}, {"start": 259.96000000000004, "end": 264.92, "text": " If the video ends here, it is because I cannot stop looking at"}, {"start": 264.92, "end": 267.72, "text": " and running these 
simulations."}, {"start": 267.72, "end": 271.08000000000004, "text": " Okay, it took a while, but I am back."}, {"start": 271.08000000000004, "end": 272.76, "text": " I have recovered."}, {"start": 272.76, "end": 276.12, "text": " So, the key to this work is that it does not do"}, {"start": 276.12, "end": 280.6, "text": " the full surface tension calculations that are typically required"}, {"start": 280.6, "end": 282.52, "text": " for simulating bubbles."}, {"start": 282.52, "end": 286.44, "text": " No, no, that would be prohibitively expensive."}, {"start": 286.44, "end": 290.03999999999996, "text": " It just says that the look bubbles typically appear"}, {"start": 290.03999999999996, "end": 294.28, "text": " in regions where air gets trapped by the fluid."}, {"start": 294.28, "end": 298.52, "text": " And it also proposes an algorithm to find these regions."}, {"start": 298.52, "end": 303.08, "text": " And the algorithm is so simple, I could not believe my eyes."}, {"start": 303.08, "end": 306.12, "text": " I thought that this was some kind of a prank."}, {"start": 306.12, "end": 311.0, "text": " And it turns out we only need to look at the curvature of the fluid"}, {"start": 311.0, "end": 315.88, "text": " and only nearby, which is really cheap, simple and fast."}, {"start": 315.88, "end": 317.64, "text": " And it really works."}, {"start": 317.64, "end": 319.4, "text": " I can't believe it."}, {"start": 319.4, "end": 323.72, "text": " Once again, this is an approximate solution, not the real deal,"}, {"start": 323.72, "end": 328.84, "text": " but it is extremely simple and fast, and it can even be added"}, {"start": 328.84, "end": 331.48, "text": " as a post-processing step to a simulation"}, {"start": 331.48, "end": 334.84, "text": " that is already finished without the bubbles."}, {"start": 334.84, "end": 337.08, "text": " How cool is that?"}, {"start": 337.08, "end": 338.28, "text": " Loving it."}, {"start": 338.28, "end": 342.2, "text": " Now, wait, we just said that surface tension computations"}, {"start": 342.2, "end": 343.64, "text": " are out of reach."}, {"start": 343.64, "end": 346.03999999999996, "text": " That just costs too much."}, {"start": 346.03999999999996, "end": 347.71999999999997, "text": " Or does it?"}, {"start": 347.71999999999997, "end": 352.91999999999996, "text": " Here is an incredible graphics paper from last year that does just that."}, {"start": 352.91999999999996, "end": 357.0, "text": " So what is all this surface tension thing good for?"}, {"start": 357.0, "end": 362.28, "text": " Well, for instance, it can simulate this paper clip floating on water."}, {"start": 362.28, "end": 366.59999999999997, "text": " That is quite remarkable because the density of the paper clip"}, {"start": 366.6, "end": 369.88, "text": " is 8 times as much as the water itself,"}, {"start": 369.88, "end": 373.40000000000003, "text": " and yet it still sits on top of the water."}, {"start": 373.40000000000003, "end": 377.48, "text": " We can also drop a bunch of cherries into water and milk"}, {"start": 377.48, "end": 380.04, "text": " and get these beautiful simulations."}, {"start": 380.04, "end": 382.68, "text": " Yes, these are not real videos."}, {"start": 382.68, "end": 386.20000000000005, "text": " These are all simulated on a computer."}, {"start": 386.20000000000005, "end": 387.8, "text": " Can you tell the difference?"}, {"start": 387.8, "end": 389.0, "text": " I'd love to know."}, {"start": 389.0, "end": 390.92, "text": " Let me know in the comments below."}, 
{"start": 390.92, "end": 393.64000000000004, "text": " And get this for simpler scenes."}, {"start": 393.64, "end": 397.88, "text": " We only need to wait for a few seconds for each frame."}, {"start": 397.88, "end": 399.8, "text": " That is insanity."}, {"start": 399.8, "end": 404.52, "text": " I told you that there are some magical works within computer graphics."}, {"start": 404.52, "end": 408.91999999999996, "text": " I am so happy for the opportunity to share them with you."}, {"start": 408.91999999999996, "end": 411.15999999999997, "text": " However, wait a second."}, {"start": 411.15999999999997, "end": 416.59999999999997, "text": " Remember, we said that these graphics works are fast and approximate."}, {"start": 416.59999999999997, "end": 418.68, "text": " But is this really true?"}, {"start": 418.68, "end": 422.76, "text": " Can they really hold the candle to these rigorous computational fluid dynamics"}, {"start": 422.76, "end": 429.32, "text": " simulations that are super accurate, but take so long that is impossible?"}, {"start": 429.32, "end": 430.12, "text": " Right?"}, {"start": 430.12, "end": 435.15999999999997, "text": " A quick simulation on our home computer cannot possibly tell us anything new"}, {"start": 435.15999999999997, "end": 436.76, "text": " about aircraft designs."}, {"start": 436.76, "end": 437.56, "text": " Can it?"}, {"start": 437.56, "end": 441.08, "text": " Well, hold on to your papers because it can."}, {"start": 441.08, "end": 445.56, "text": " And it not only can, but you are already looking at it."}, {"start": 445.56, "end": 447.96, "text": " This is a devilishly difficult test,"}, {"start": 447.96, "end": 452.03999999999996, "text": " a real aerodynamic simulation in a wind tunnel."}, {"start": 452.04, "end": 456.76000000000005, "text": " In these cases, getting really accurate results is critical."}, {"start": 456.76000000000005, "end": 462.92, "text": " For instance, here we would like to see that if we were to add a spoiler to this car,"}, {"start": 462.92, "end": 467.24, "text": " how much of an aerodynamic advantage we would get in return."}, {"start": 467.24, "end": 470.52000000000004, "text": " Here are the results from the real wind tunnel test."}, {"start": 470.52000000000004, "end": 474.44, "text": " And now, let's see how the new method compares."}, {"start": 474.44, "end": 475.56, "text": " Wow!"}, {"start": 475.56, "end": 476.52000000000004, "text": " Goodness."}, {"start": 476.52000000000004, "end": 478.84000000000003, "text": " It is not perfect by any means,"}, {"start": 478.84, "end": 484.84, "text": " but seems accurate enough that we can see the wake flow of the car clearly enough"}, {"start": 484.84, "end": 488.28, "text": " so that we can make an informed decision on that spoiler."}, {"start": 488.91999999999996, "end": 494.2, "text": " Yes, this and even more is possible with computer graphics algorithms."}, {"start": 494.2, "end": 500.52, "text": " These approximate solutions became so accurate and so fast that we are seeing these two"}, {"start": 500.52, "end": 505.0, "text": " seemingly completely different branches, computational fluid dynamics,"}, {"start": 505.0, "end": 509.32, "text": " and computer graphics getting closer and closer to each other."}, {"start": 509.32, "end": 515.32, "text": " Even just a couple years ago, I did not dare to think that this would ever be possible."}, {"start": 515.32, "end": 520.04, "text": " And yet, these fast and predictive simulations are within arms reach,"}, {"start": 520.04, "end": 
522.92, "text": " and just a couple more papers down the line,"}, {"start": 522.92, "end": 529.32, "text": " and we might be able to enter a world where an engineer is able to test new ideas"}, {"start": 529.32, "end": 532.04, "text": " in aircraft design every few minutes."}, {"start": 532.04, "end": 534.04, "text": " What a time to be alive!"}, {"start": 534.04, "end": 541.64, "text": " So, all this came about because I am worried that as the field of computer graphics is so small,"}, {"start": 541.64, "end": 544.04, "text": " there are some true gems out there,"}, {"start": 544.04, "end": 546.36, "text": " and if we don't talk about these works,"}, {"start": 546.36, "end": 549.4, "text": " I am afraid that almost no one will."}, {"start": 549.4, "end": 554.8399999999999, "text": " And this is why two minute papers and this paper came into existence."}, {"start": 554.8399999999999, "end": 559.9599999999999, "text": " And I am so happy to have a common paper accepted to nature physics,"}, {"start": 559.9599999999999, "end": 562.52, "text": " and now to be able to show it to you."}, {"start": 562.52, "end": 565.0, "text": " By the way, the paper is a quick read,"}, {"start": 565.0, "end": 569.0799999999999, "text": " and you can read it for free through the link in the video description."}, {"start": 569.0799999999999, "end": 574.76, "text": " I think we are getting so close to real life-lack simulations everyone has to hear about it."}, {"start": 575.48, "end": 576.6, "text": " So cool!"}, {"start": 576.6, "end": 580.4399999999999, "text": " If you wish to help me spread the word about these incredible works,"}, {"start": 580.4399999999999, "end": 584.76, "text": " please consider sharing this with your friends and tweeting about it."}, {"start": 584.76, "end": 590.4399999999999, "text": " I would also like to send a big thank you for this amazing opportunity to write this paper"}, {"start": 590.44, "end": 594.6, "text": " and to Christopher Betty who sent super useful comments."}, {"start": 594.6, "end": 596.36, "text": " So, what do you think?"}, {"start": 596.36, "end": 598.6800000000001, "text": " What would you use these techniques for?"}, {"start": 598.6800000000001, "end": 600.5200000000001, "text": " Let me know in the comments below."}, {"start": 600.5200000000001, "end": 604.7600000000001, "text": " If you are looking for inexpensive cloud GPUs for AI,"}, {"start": 604.7600000000001, "end": 610.9200000000001, "text": " Lambda now offers the best prices in the world for GPU cloud compute."}, {"start": 610.9200000000001, "end": 613.72, "text": " No commitments or negotiation required."}, {"start": 613.72, "end": 616.7600000000001, "text": " Just sign up and launch an instance."}, {"start": 616.76, "end": 621.16, "text": " And hold on to your papers because with Lambda GPU cloud,"}, {"start": 621.16, "end": 631.3199999999999, "text": " you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 631.3199999999999, "end": 634.04, "text": " That's 73% savings."}, {"start": 634.04, "end": 637.56, "text": " Did I mention they also offer persistent storage?"}, {"start": 637.56, "end": 643.4, "text": " So join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 643.4, "end": 647.9599999999999, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 647.9599999999999, "end": 655.8, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances today."}, 
{"start": 655.8, "end": 682.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XW_nO2NMH_g
Google's AI: Stable Diffusion On Steroids! 💪
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: http://wandb.me/prompt2prompt 📝 The paper "Prompt-to-Prompt Image Editing with Cross Attention Control" is available here: https://arxiv.org/abs/2208.01626 Unofficial open source implementation: https://github.com/bloc97/CrossAttentionControl ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Stable Diffusion frame interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at how this new paper just supercharged AI-driven image generation. For instance, you will see that it can do this and this and even this. And today it also seems clearer and clearer that we are entering the age of AI-driven image generation. You see, these new learning-based methods can do something that previously was only possible in science fiction movies. And that is, we enter what we wish to see and the AI paints an image for us. Last year, this image was possible; this year, this image is possible. That is incredible progress in just one year. So, I wonder, what is the next step? Beyond the regular quality increases, how else could we improve these systems? Well, scientists at Google had a fantastic idea. In this paper, they promise prompt-to-prompt editing. What is that? What problem does this solve? Well, whenever we create an image and feel mostly satisfied with it but still need just a little change, we cannot easily do that. But now, have a look at five of my favorite examples of doing exactly this with this new method. One, if we create this imaginary cat riding a bike and we are happy with this concept, but after taking some driving lessons, our little imaginary cat wishes to get a car now. Well, now it is possible. Just change the prompt and get the same image with minimal modifications to satisfy the changes we have made. I love it. Interestingly, it has also become a bit of a chunker in the process, a testament to how healthy it is to ride the bike instead. And two, if we are yearning for bigger changes, we can use a photo and change its style as if it were painted by a child. And I have to say, this one is very convincing. Three, and now, hold onto your papers and behold the cake generator AI. Previously, if we created this lemon cake and wished to create other variants of it, for instance, a cheesecake or an apple cake, we got a completely different result. These variants don't have a great deal to do with the original photo. And I wonder, would it be possible with the new technique that, oh my goodness, yes, look at that. These cakes are not only delicious, but they are also real variants of the original slice. Yum, this is fantastic. So, AI-generated cuisine, huh? Sign me up right now. Four, after generating a car at the side of the street, we can even say how we wish to change the car itself. For instance, let's make it a sports car instead. Or if we are happy with the original car, we can also ask the AI to leave the car intact and change its surroundings instead. Let's put it on a flooded street. Or quickly, before water damage happens, put it in Manhattan instead. Excellent. Loving it. Now, of course, you see that not even this technique is perfect, the car still has changed a little, but that is something that will surely be addressed a couple more papers down the line. Five, we can even engage in mask-based editing. If we feel that this beautiful cat also deserves a beautiful shirt, we can delete this part of the screen, and then the AI will start from a piece of noise and morph it until it becomes a shirt. How cool is that? So good. It works for many different kinds of apparel too. And while we marvel at some more of these amazing examples, I would like to tell you one more thing that I loved about this paper. And that is, it describes a general concept. Why is this super cool?
Well, it is super cool because it can be applied to different image generators. If you look carefully here, you see that this concept was applied to Google's own closed solution, Imagen, here. And I hope you know what's coming now. Oh, yes, a free and open-source text-to-image synthesizer is also available, and it goes by the name Stable Diffusion; we celebrated it coming into existence a few episodes ago. But why am I so excited about this? Well, with Stable Diffusion, we can finally take out our digital wrench and tinker with it. For instance, we can now adjust the internal parameters in ways that we cannot do with closed solutions like DALL-E 2 and Imagen. So let's have a look at why that matters. Do you see the prompts here? Of course you do. Now, what else do you see? Parameters. Yes, this means that the hood is popped open; we can not only look into the inner workings of the AI, but we can also play with them. And thus, these results become reproducible at home. So much so that there is already an unofficial open-source implementation of this new technique applied to Stable Diffusion. Both of these are free for everyone to run. I am loving this. What a time to be alive. And once again, this showcases the power of the papers and the power of the community. The links are available in the video description. And for now, let the experiments begin. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open-source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 12.16, "text": " Today we are going to have a look at how this new paper just supercharged A-A driven image generation."}, {"start": 12.16, "end": 19.28, "text": " For instance, you will see that it can do this and this and even this."}, {"start": 19.28, "end": 26.88, "text": " And today it also seems clear and clear that we are entering the age of A-A driven image generation."}, {"start": 26.88, "end": 33.04, "text": " You see, these new learning based methods can do something that previously was only possible"}, {"start": 33.04, "end": 40.64, "text": " in science fiction movies. And that is, we enter what we wish to see and the AI paints an image for us."}, {"start": 40.64, "end": 46.32, "text": " Last year, this image was possible, this year, this image is possible."}, {"start": 46.32, "end": 50.16, "text": " That is incredible progress in just one year."}, {"start": 50.16, "end": 59.36, "text": " So, I wonder what is the next step? Beyond the regular quality increases, how else could we improve these systems?"}, {"start": 59.36, "end": 66.8, "text": " Well, scientists at Google had a fantastic idea. In this paper, they promise prompt-to-prommed editing."}, {"start": 67.6, "end": 74.08, "text": " What is that? What problem does this solve? Well, whenever we create an image and we feel"}, {"start": 74.08, "end": 81.75999999999999, "text": " mostly satisfied with it, but we would need to add just a little change to it. We cannot easily do that."}, {"start": 81.75999999999999, "end": 88.72, "text": " But now, have a look at five of my favorite examples of doing exactly this with this new method."}, {"start": 89.44, "end": 96.08, "text": " One, if we create this imaginary cat riding a bike and we are happy with this concept,"}, {"start": 96.08, "end": 103.03999999999999, "text": " but after taking some driving lessons, our little imaginary cat wishes to get a car now."}, {"start": 103.04, "end": 111.12, "text": " Well, now it is possible. Just change the prompt and get the same image with minimal modifications"}, {"start": 111.12, "end": 118.08000000000001, "text": " to satisfy the changes we have made. I love it. Interestingly, it has also become a bit of a"}, {"start": 118.08000000000001, "end": 123.92, "text": " chunker in the process, a testament to how healthy it is to ride the bike instead."}, {"start": 123.92, "end": 131.20000000000002, "text": " And two, if we are yearning for bigger changes, we can use a photo and change its style"}, {"start": 131.2, "end": 137.35999999999999, "text": " as if it were painted by a child. And I have to say, this one is very convincing."}, {"start": 138.0, "end": 145.83999999999997, "text": " Three, and now, hold onto your papers and behold the cake generator AI. Previously, if we created"}, {"start": 145.83999999999997, "end": 153.76, "text": " this lemon cake and wish to create other variants of it, for instance, a cheesecake or an apple cake,"}, {"start": 153.76, "end": 159.92, "text": " we got a completely different result. These variants don't have a great deal to do with the"}, {"start": 159.92, "end": 167.11999999999998, "text": " original photo. And I wonder, would it be possible with the new technique that, oh my goodness,"}, {"start": 167.11999999999998, "end": 174.95999999999998, "text": " yes, look at that. 
These cakes are not only delicious, but they are also a real variants of"}, {"start": 174.95999999999998, "end": 184.07999999999998, "text": " the original slice. Yum, this is fantastic. So, AI generated cuisine, huh? Sign me up right now."}, {"start": 184.08, "end": 191.60000000000002, "text": " Four, after generating a car at the side of the street, we can even say how we wish to change the"}, {"start": 191.60000000000002, "end": 198.88000000000002, "text": " car itself. For instance, let's make it a sports car instead. Or if we are happy with the original"}, {"start": 198.88000000000002, "end": 205.84, "text": " car, we can also ask the AI to leave the car intact and change its surroundings instead."}, {"start": 206.48000000000002, "end": 212.88000000000002, "text": " Let's put it on a flooded street. Or quickly, before water damage happens, put it in Manhattan"}, {"start": 212.88, "end": 219.84, "text": " instead. Excellent. Loving it. Now, of course, you see that not even this technique is perfect,"}, {"start": 219.84, "end": 226.07999999999998, "text": " the car still has changed a little, but that is something that will surely be addressed a couple"}, {"start": 226.07999999999998, "end": 233.28, "text": " more papers down the line. Five, we can even engage in mask-based editing. If we feel that this"}, {"start": 233.28, "end": 240.48, "text": " beautiful cat also deserves a beautiful shirt, we can delete this part of the screen, then the AI"}, {"start": 240.48, "end": 249.76, "text": " will start from a piece of noise and morph it until it becomes a shirt. How cool is that? So good."}, {"start": 249.76, "end": 255.6, "text": " It works for many different kinds of apparel too. And while we marvel at some more of these"}, {"start": 255.6, "end": 261.52, "text": " amazing examples, I would like to tell you one more thing that I loved about this paper."}, {"start": 261.52, "end": 268.4, "text": " And that is, it describes a general concept. Why is this super cool? Well, it is super cool"}, {"start": 268.4, "end": 274.15999999999997, "text": " because it can be applied to different image generators. If you look carefully here,"}, {"start": 274.15999999999997, "end": 280.0, "text": " you see that this concept was applied to Google's own closed solution, image here."}, {"start": 280.64, "end": 287.67999999999995, "text": " And I hope you know what's coming now. Oh, yes, a free and open source text to image synthesizer"}, {"start": 287.67999999999995, "end": 294.79999999999995, "text": " is also available and it goes by the name StableDefusion, we celebrated it coming into existence"}, {"start": 294.8, "end": 301.84000000000003, "text": " a few episodes ago. But why am I so excited about this? Well, with StableDefusion,"}, {"start": 301.84000000000003, "end": 308.8, "text": " we can finally take out our digital range and tinker with it. For instance, we can now adjust"}, {"start": 308.8, "end": 315.04, "text": " the internal parameters in ways that we cannot do with the closed solutions like Dolly II and"}, {"start": 315.04, "end": 321.92, "text": " Imogen. So let's have a look at why that matters. Do you see the prompts here? Of course you do."}, {"start": 321.92, "end": 330.8, "text": " Now, what else do you see? Parameters. Yes, this means that the hood is popped open, we can not"}, {"start": 330.8, "end": 338.48, "text": " only look into the inner workings of the AI, but we can also play with them. 
And thus, these results"}, {"start": 338.48, "end": 346.16, "text": " become reproducible at home. So much so that there is already an unofficial open source implementation"}, {"start": 346.16, "end": 352.64000000000004, "text": " of this new technique applied to StableDefusion. Both of these are free for everyone to run."}, {"start": 352.64000000000004, "end": 360.88000000000005, "text": " I am loving this. What a time to be alive. And once again, this showcases the power of the papers"}, {"start": 360.88000000000005, "end": 366.96000000000004, "text": " and the power of the community. The links are available in the video description. And for now,"}, {"start": 366.96000000000004, "end": 372.96000000000004, "text": " let the experiments begin. What you see here is a report of this exact paper we have talked about"}, {"start": 372.96, "end": 378.4, "text": " which was made by Wates and Biasis. I put a link to it in the description. Make sure to have a look."}, {"start": 378.4, "end": 384.47999999999996, "text": " I think it helps you understand this paper better. Wates and Biasis provides tools to track your"}, {"start": 384.47999999999996, "end": 390.4, "text": " experiments in your deep learning projects. Using their system, you can create beautiful reports"}, {"start": 390.4, "end": 396.64, "text": " like this one to explain your findings to your colleagues better. It is used by many prestigious labs"}, {"start": 396.64, "end": 403.91999999999996, "text": " including OpenAI, Toyota Research, GitHub and more. And the best part is that Wates and Biasis"}, {"start": 403.91999999999996, "end": 410.64, "text": " is free for all individuals, academics and open source projects. Make sure to visit them through"}, {"start": 410.64, "end": 418.32, "text": " www.nb.com slash papers or just click the link in the video description and you can get a free demo"}, {"start": 418.32, "end": 424.47999999999996, "text": " today. Our thanks to Wates and Biasis for their longstanding support and for helping us make"}, {"start": 424.48, "end": 429.92, "text": " better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
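For readers who want to try the "popped-open hood" part of the transcript at home, here is a minimal sketch using the Hugging Face diffusers library. Note that re-using the same seed with an edited prompt is only the naive baseline; the full prompt-to-prompt method additionally reinjects the cross-attention maps of the first generation (see the unofficial implementation linked in the description). The model name and parameter values here are common defaults, not values from the paper.

```python
# A minimal sketch of playing with Stable Diffusion's exposed parameters
# via Hugging Face diffusers. Same seed + edited prompt is only a naive
# baseline; real prompt-to-prompt also injects cross-attention maps.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

def generate(prompt, seed=42):
    # The popped-open hood: seed, step count and guidance scale are all ours.
    g = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, num_inference_steps=50,
                guidance_scale=7.5, generator=g).images[0]

before = generate("a cat riding a bicycle")
after = generate("a cat riding a car")  # same seed, edited prompt
after.save("edited.png")
```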
Two Minute Papers
https://www.youtube.com/watch?v=H-pTZf1zsa8
Google’s New AI: Fly INTO Photos…But Deeper! 🐦
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "InfiniteNature-Zero Learning Perpetual View Generation of Natural Scenes from Single Images" is available here: https://infinite-nature-zero.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to use an AI to fly into this photo. And we will see how much better this technique has become in just one year. It will be insanity. Yes, in a previous video, we explored an insane idea. What if we could take just one photograph of a landscape and then fly into this photo like a bird? Of course, that is a big ask, because to be able to do this, we would need to invent at least three things. One is image inpainting. Look, when we start flying, the regions between the trees are missing. We need to generate those. Two, information is also missing not just within, but around the photo. This is a huge problem because completely new regions should also appear that are beyond the image. This means that we also need to perform image outpainting, creating these new regions from scratch, continuing the image if you will. And three, as we fly closer to these new regions, we will be looking at fewer and fewer pixels from closer and closer, which means this. For this, we would need super resolution. In goes a coarse image or video, and this AI-based method is tasked with this. Yes, this is not science fiction. This is super resolution, where the AI starts out from noise and synthesizes crisp details onto the image. So, last year, scientists at Google created an amazing AI that was able to learn and fuse all three of these techniques together to create this. Wow, so this is possible after all. Well, hold on to your papers, because it is not only possible, but the follow-up paper is already here. We are now one more paper down the line, less than a year later. I know you're itching to see it. Me too. So, let's compare them together. These methods will all start from the same point and... Oh my! Look how quickly they deviate. The two earlier methods quickly break down, and this is the work that we talked about a few weeks ago. This is clearly much better. However, as every single one of you Fellow Scholars can see, it lacks temporal coherence. What is that? Well, this means that the AI does not have a long-term vision of what it wishes to do, and it barely remembers what it just did a few frames ago. As a result, these landscapes start morphing into something completely different very quickly. And now, you know what's coming, so hold on to your papers and let's look at the new technique. My goodness, I love it. So much improvement in just a year. Now, everyone can see that these are also not perfect, but this kind of improvement just one more paper down the line is nothing short of amazing, especially since it doesn't only offer better quality. No, no, it can do even more. It offers us more control too. Now, we can turn the camera around, and whenever we see something interesting, we can decide which direction we wish to go. And the result is that now we can create and also control these beautiful long aerial videos where, after the first few frames, every single thing is synthesized by an AI. How cool is that? What a time to be alive! And it doesn't stop there, it gets even better. If you have been holding on to your paper so far, now squeeze that paper, because this new AI synthesizes these videos without ever having seen one. That's right, it had never seen a video. The previous work was trained on drone videos, but training this one only requires a collection of single photographs. Multiple views and the camera position are not required. That is insanity.
This AI is so much smarter than the previous one that was published just a year ago, and it requires training data that is much easier to produce at the same time. And I wonder what it will be capable of just two more papers down the line. So cool! This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and of course creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung and many more prestigious labs. Make sure to use the link wandb.me slash paperintro or just click the link in the video description and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Carlos Jo\u00e3o Ifejir."}, {"start": 4.64, "end": 9.52, "text": " Today we are going to use an AI to fly into this photo."}, {"start": 9.52, "end": 15.200000000000001, "text": " And we will see how much better this technique has become in just one year."}, {"start": 15.200000000000001, "end": 17.36, "text": " It will be insanity."}, {"start": 17.36, "end": 21.6, "text": " Yes, in a previous video, we explored an insane idea."}, {"start": 21.6, "end": 25.6, "text": " What if we could take just one photograph of a landscape"}, {"start": 25.6, "end": 29.84, "text": " and then we would fly into this photo like a bird?"}, {"start": 29.84, "end": 33.92, "text": " Of course, that is a big ask because to be able to do this,"}, {"start": 33.92, "end": 37.36, "text": " we would need to invent at least three things."}, {"start": 37.36, "end": 39.6, "text": " One is image in painting."}, {"start": 39.6, "end": 44.24, "text": " Look, when we start flying, the regions between the trees are missing."}, {"start": 44.24, "end": 46.239999999999995, "text": " We need to generate those."}, {"start": 46.239999999999995, "end": 52.4, "text": " Two, information is also missing not just within, but around the photo."}, {"start": 52.4, "end": 57.519999999999996, "text": " This is a huge problem because completely new regions should also appear"}, {"start": 57.52, "end": 59.84, "text": " that are beyond the image."}, {"start": 59.84, "end": 64.0, "text": " This means that we also need to perform image out painting,"}, {"start": 64.0, "end": 69.12, "text": " creating these new regions from scratch, continuing the image if you will."}, {"start": 69.12, "end": 72.88, "text": " And three, as we fly closer to these new regions,"}, {"start": 72.88, "end": 76.08, "text": " we will be looking at fewer and fewer pixels"}, {"start": 76.08, "end": 79.52000000000001, "text": " and from closer and closer, which means this."}, {"start": 80.32000000000001, "end": 83.36, "text": " For this, we would need super resolution."}, {"start": 83.36, "end": 89.44, "text": " In goes a course image or video and this AI-based method is tasked with this."}, {"start": 90.0, "end": 92.8, "text": " Yes, this is not science fiction."}, {"start": 92.8, "end": 97.28, "text": " This is super resolution where the AI starts out from noise"}, {"start": 97.28, "end": 100.48, "text": " and synthesizes crisp details onto the image."}, {"start": 101.03999999999999, "end": 106.24, "text": " So, last year, scientists at Google created an amazing AI"}, {"start": 106.24, "end": 112.4, "text": " that was able to learn and fuse all these three techniques together to create this."}, {"start": 112.4, "end": 116.08000000000001, "text": " Wow, so this is possible after all."}, {"start": 116.08000000000001, "end": 120.56, "text": " Well, hold on to your papers because it is not only possible,"}, {"start": 120.56, "end": 123.68, "text": " but the follow-up paper is already here."}, {"start": 123.68, "end": 128.32, "text": " We are now one more paper down the line less than a year later."}, {"start": 128.32, "end": 130.4, "text": " I know you're itching to see it."}, {"start": 130.4, "end": 131.44, "text": " Me too."}, {"start": 131.44, "end": 133.76, "text": " So, let's compare them together."}, {"start": 133.76, "end": 137.52, "text": " These methods will all start from the same point and..."}, {"start": 137.52, "end": 138.96, "text": " Oh my!"}, {"start": 138.96, "end": 
141.36, "text": " Look how quickly they deviate."}, {"start": 141.36, "end": 148.24, "text": " The two earlier methods quickly break down and this is the work that we talked about a few weeks ago."}, {"start": 148.24, "end": 150.88000000000002, "text": " This is clearly much better."}, {"start": 150.88000000000002, "end": 154.88000000000002, "text": " However, as every single one of you fellow scholars can see,"}, {"start": 154.88000000000002, "end": 157.12, "text": " it lacks temporal coherence."}, {"start": 157.12, "end": 158.72000000000003, "text": " What is that?"}, {"start": 158.72000000000003, "end": 164.72000000000003, "text": " Well, this means that the AI does not have a long-term vision of what it wishes to do"}, {"start": 164.72000000000003, "end": 169.04000000000002, "text": " and it barely remembers what it just did a few frames ago."}, {"start": 169.04, "end": 176.4, "text": " As a result, these landscapes start morphing into something completely different very quickly."}, {"start": 176.4, "end": 182.95999999999998, "text": " And now, you know what's coming, so hold on to your papers and let's look at the new technique."}, {"start": 182.95999999999998, "end": 186.07999999999998, "text": " My goodness, I love it."}, {"start": 186.07999999999998, "end": 189.2, "text": " So much improvement in just a year."}, {"start": 189.2, "end": 193.68, "text": " Now, everyone can see that these are also not perfect,"}, {"start": 193.68, "end": 197.92, "text": " but this kind of improvement just one more paper down the line"}, {"start": 197.92, "end": 203.35999999999999, "text": " is nothing short of amazing, especially that it doesn't only have better quality."}, {"start": 204.16, "end": 206.88, "text": " No, no, it can do even more."}, {"start": 206.88, "end": 209.6, "text": " It offers us more control too."}, {"start": 210.16, "end": 215.35999999999999, "text": " Now, we can turn the camera around and whenever we see something interesting,"}, {"start": 215.35999999999999, "end": 218.56, "text": " we can decide which direction we wish to go."}, {"start": 218.56, "end": 226.32, "text": " And the result is that now we can create and also control these beautiful long-area videos"}, {"start": 226.32, "end": 232.23999999999998, "text": " where after the first few frames, every single thing is synthesized by an AI."}, {"start": 232.23999999999998, "end": 234.23999999999998, "text": " How cool is that?"}, {"start": 234.23999999999998, "end": 236.16, "text": " What a time to be alive!"}, {"start": 236.16, "end": 239.92, "text": " And it doesn't stop there, it gets even better."}, {"start": 239.92, "end": 244.32, "text": " If you have been holding on to your paper so far, now squeeze that paper"}, {"start": 244.32, "end": 250.16, "text": " because this new AI synthesizes these videos without ever having seen one."}, {"start": 250.88, "end": 254.16, "text": " That's right, it had never seen a video."}, {"start": 254.16, "end": 257.04, "text": " The previous work was trained on drone videos,"}, {"start": 257.04, "end": 262.32, "text": " but training this one only requires a collection of single photographs."}, {"start": 262.32, "end": 266.4, "text": " Multiple views and the camera position are not required."}, {"start": 266.4, "end": 268.32, "text": " That is insanity."}, {"start": 268.32, "end": 274.32, "text": " This AI is so much smarter than the previous one that was published just a year ago,"}, {"start": 274.32, "end": 280.0, "text": " and it requires training data that is much easier to 
produce at the same time."}, {"start": 280.0, "end": 285.44, "text": " And I wonder what will be capable of just two more papers down the line."}, {"start": 285.44, "end": 286.32, "text": " So cool!"}, {"start": 286.32, "end": 289.84, "text": " This video has been supported by weights and biases."}, {"start": 289.84, "end": 294.4, "text": " Being a machine learning researcher means doing tons of experiments"}, {"start": 294.4, "end": 297.04, "text": " and of course creating tons of data."}, {"start": 297.68, "end": 302.0, "text": " But I am not looking for data, I am looking for insights."}, {"start": 302.0, "end": 305.36, "text": " And weights and biases helps with exactly that."}, {"start": 305.36, "end": 307.76, "text": " They have tools for experiment tracking,"}, {"start": 307.76, "end": 312.56, "text": " data set and model versioning and even hyper parameter optimization."}, {"start": 313.28, "end": 319.03999999999996, "text": " No wonder this is the experiment tracking tool choice of OpenAI, Toyota Research,"}, {"start": 319.03999999999996, "end": 322.4, "text": " Samsung and many more prestigious labs."}, {"start": 322.4, "end": 328.15999999999997, "text": " Make sure to use the link wnb.me slash paper intro"}, {"start": 328.15999999999997, "end": 330.8, "text": " or just click the link in the video description"}, {"start": 330.8, "end": 335.03999999999996, "text": " and try this 10 minute example of weights and biases today"}, {"start": 335.04, "end": 339.28000000000003, "text": " to experience the wonderful feeling of training a neural network"}, {"start": 339.28000000000003, "end": 341.84000000000003, "text": " and being in control of your experiments."}, {"start": 342.56, "end": 345.04, "text": " After you try it, you won't want to go back."}, {"start": 345.04, "end": 347.28000000000003, "text": " Thanks for watching and for your generous support,"}, {"start": 347.28, "end": 374.0, "text": " and I'll see you next time."}]
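The three ingredients listed in the transcript — inpainting, outpainting and super resolution — slot together in a simple render-refine-repeat loop. Below is a high-level sketch of that loop; render_next_view, inpaint_missing and super_resolve are hypothetical placeholders standing in for the learned components, not functions from the InfiniteNature-Zero codebase.

```python
# A high-level sketch of the perpetual view generation loop described in
# the transcript. The three callables are hypothetical placeholders for
# the learned networks (re-rendering, inpainting/outpainting, upsampling).
def fly_into_photo(image, camera_poses, render_next_view,
                   inpaint_missing, super_resolve):
    frames = [image]
    for pose in camera_poses:
        # 1) Warp the latest frame to the next camera pose; this leaves
        #    holes inside the image and unknown regions beyond its borders.
        warped, hole_mask = render_next_view(frames[-1], pose)
        # 2) Inpaint the holes and outpaint past the old image borders.
        filled = inpaint_missing(warped, hole_mask)
        # 3) We are now closer and looking at fewer pixels: add crisp detail.
        frames.append(super_resolve(filled))
    return frames
```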
Two Minute Papers
https://www.youtube.com/watch?v=xYvJV_z5Sxc
Google’s New Self-Driving Robot Is Amazing! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action" is available below. Note that this is a collaboration between UC Berkeley, University of Warsaw, and Robotics at Google. https://sites.google.com/view/lmnav Website layout with GPT-3: https://twitter.com/sharifshameem/status/1283322990625607681 Image interpolation video with Stable Diffusion: https://twitter.com/xsteenbrugge/status/1558508866463219712 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-6394751/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to give instructions to Google's new self-driving robot and see if it is smart enough to deal with these nasty tasks. Look at that! There is no way, right? Well, with our current tools, I am not so sure, but let's be ambitious and think about a possible dream scenario for a self-driving robot. For this, we need three puzzle pieces. You see, whenever we talk about self-driving techniques, the input is an image, and the question is something like: okay, self-driving car, what would you do next? But note that the input is not a sentence. These AIs don't speak English, so, puzzle piece number one: language. For a possible solution to that, have a look at OpenAI's and Nvidia's AIs that can play Minecraft, and here comes the best part: they also understand English. How do we do that? Well, we can use OpenAI's earlier technique called GPT-3, which has read a huge part of the internet, and get this, it now understands English. However, we are not done yet. Not even close. Puzzle piece number two: images. If we wish it not only to be able to understand English, but to understand our instructions, it needs to be able to connect these English words to images. You see, it knows how to spell stop sign, but it hasn't the slightest clue what a stop sign really looks like. So, we need to add something that would make it understand what it sees. Fortunately, we can also do this, for instance, using the recent DALL-E AI that took the world by storm by being able to generate these incredible images given our text descriptions. And this has a back-and-forth understanding between words and images. This module is called CLIP, and it is a general puzzle piece that fortunately can be reused here too. But we are not done yet. Now, puzzle piece number three. Of course, we need self-driving, but let's make it really hard for this paper. For instance, train the AI on a large, unannotated self-driving dataset, which means that it does not get a great deal of help with the training. There is lots of data, but not a lot of instructions within this data. Then, finally, let's chuck all three of these pieces into a robot and see if it became smart enough to be able to get around the real world. Of course, there is no way, right? That sounds like science fiction, and it might never happen. Well, I hope you know what's coming. Now, hold on to your papers and check out Google's new self-driving robot, which they say has all three of these puzzle pieces. So, let's take it out for a ride and see what it can do in two amazing experiments. First, let's keep things light. Experiment number one. Let's ask it to recognize and go to a white building. Nice. It found and recognized it. That's a good start. Then, continue until you see a white truck. Now, continue until you find a stop sign. And we are done. This is incredible. We really have all three puzzle pieces working together, and this is an excellent application of the Second Law of Papers, which says that everything is connected. This AI can drive itself, understands English, and can see and understand the world around it too. Wow. So now, are you thinking what I am thinking? Oh, yes. Let's give it a really hard time. Yes, final boss time. Look at this prompt. Experiment number two. Let's start following the red brick building until we see a fire hydrant. And, haha, let's get a bit tricky. Yep. That's right. No fire hydrant anywhere to be seen. Your move, little robot. And look at that.
It stops, turns around, and goes the other way, and finally finds that highly sought-after fire hydrant. Very cool. Now, into the grove to the right. Okay, this might be it. And now, you are supposed to take a right when you see a manhole cover. No manhole cover yet, but there were ones earlier that it is supposed to ignore, so good job not falling for those. And now, the search continues. But no manhole cover. Still, not one in sight. Wait. Got it. And now, to the right, and find a trailer. No trailer anywhere to be seen. Are we even at the right place? This is getting very confusing. But with a little more perseverance... Yes, there is the trailer. Good job, little robot. This is absolutely incredible. I think this description could have confused many humans, yet the robot prevailed. All three puzzle pieces are working together really well. This looks like something straight out of a science fiction movie. And a moment that I really loved in this paper: look, this is a scientist who is trying to keep up with the robot in the meantime. Imagine this person going home and being asked, honey, what have you been doing today? Well, I followed the robot around all day. It was the best. How cool is that? So, with DALL-E 2 and Stable Diffusion, we are definitely entering the age of AI-driven image creation. And with DeepMind's AlphaFold, perhaps even the age of AI-driven medicine, and we are getting closer and closer to the age of AI self-driving and navigation too. Loving it. What a time to be alive. And before we go, have a look at this. Oh yes, as of the making of this video, only 152 people have looked at this paper. And that, dear Fellow Scholars, is why Two Minute Papers exists. I am worried that if we don't talk about some of these works in these videos, almost no one will. So, thank you so much for coming along on this journey with me. And if you feel that you would like more of this, please consider subscribing. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold onto your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 10.48, "text": " Today, we are going to give instructions to Google's new self-driving robot"}, {"start": 10.48, "end": 15.200000000000001, "text": " and see if it is smart enough to deal with these nasty tasks."}, {"start": 15.200000000000001, "end": 19.04, "text": " Look at that! There is no way, right?"}, {"start": 19.04, "end": 25.92, "text": " Well, with our current tools, I am not so sure, but let's be ambitious and think about"}, {"start": 25.92, "end": 30.400000000000002, "text": " a possible dream scenario for a self-driving robot."}, {"start": 30.400000000000002, "end": 33.52, "text": " For this, we need three puzzle pieces."}, {"start": 33.52, "end": 38.400000000000006, "text": " You see, whenever we talk about self-driving techniques, the input is an image"}, {"start": 38.400000000000006, "end": 45.120000000000005, "text": " and the output is a question like, okay, self-driving car, what would you do next?"}, {"start": 45.120000000000005, "end": 48.88, "text": " But note that the input is not a sentence."}, {"start": 48.88, "end": 54.160000000000004, "text": " These AIs don't speak English, so puzzle piece number one."}, {"start": 54.16, "end": 61.36, "text": " Language. For a possible solution to that, have a look at OpenAI and Nvidia's AI that can play"}, {"start": 61.36, "end": 67.6, "text": " Minecraft and here comes the best part. They also understand English."}, {"start": 67.6, "end": 74.16, "text": " How do we do that? Well, we can use OpenAI's earlier technique called GPT-3,"}, {"start": 74.16, "end": 81.12, "text": " which has read a huge part of the internet and get this, it now understands English."}, {"start": 81.12, "end": 85.36, "text": " However, we are not done yet. Not even close."}, {"start": 85.36, "end": 92.56, "text": " Puzzle piece number two. Images. If we wish it not only to be able to understand English,"}, {"start": 92.56, "end": 99.60000000000001, "text": " but to understand our instructions, it needs to be able to connect these English words to images."}, {"start": 99.60000000000001, "end": 106.88000000000001, "text": " You see, it knows how to spell stop sign, but it hasn't the slightest clue what a stop sign"}, {"start": 106.88, "end": 112.8, "text": " really looks like. So, we need to add something that would make it understand what it sees."}, {"start": 113.44, "end": 120.88, "text": " Fortunately, we can also do this, for instance, using the recent Dolly AI that took the world by storm"}, {"start": 120.88, "end": 126.47999999999999, "text": " by being able to generate these incredible images given our text descriptions."}, {"start": 126.47999999999999, "end": 131.84, "text": " And this has a back and forth understanding between words and images."}, {"start": 131.84, "end": 138.32, "text": " This module is called Clip, and it is a general puzzle piece that fortunately can be"}, {"start": 138.32, "end": 146.48000000000002, "text": " reused here too. But we are not done yet. Now, puzzle piece number three. Of course, we need"}, {"start": 146.48000000000002, "end": 153.36, "text": " self-driving, but make it really hard for this paper. For instance, train the AI on a large,"}, {"start": 153.36, "end": 159.28, "text": " unannotated, self-driving data set, which means that it does not get a great deal of help with the"}, {"start": 159.28, "end": 165.12, "text": " training. 
There is lots of data, but not a lot of instructions within this data."}, {"start": 165.76, "end": 173.52, "text": " Then, finally, let's chuck all these three pieces into a robot and see if it became smart enough"}, {"start": 173.52, "end": 180.0, "text": " to be able to get around the real world. Of course, there is no way, right? That sounds like"}, {"start": 180.0, "end": 186.24, "text": " science fiction, and it might never happen. Well, I hope you know what's coming. Now,"}, {"start": 186.24, "end": 194.16, "text": " hold on to your papers and check out Google's new self-driving robot, which they say has all three"}, {"start": 194.16, "end": 201.68, "text": " of these puzzle pieces. So, let's take it out for a ride and see what it can do in two amazing"}, {"start": 201.68, "end": 209.52, "text": " experiments. First, let's keep things light. Experiment number one. Let's ask it to recognize and go"}, {"start": 209.52, "end": 218.0, "text": " to a wide building. Nice. It's found and recognized it. That's a good start. Then, continue until"}, {"start": 218.0, "end": 229.76000000000002, "text": " you see a wide truck. Now, continue until you find a stop sign. And we are done. This is incredible."}, {"start": 229.76000000000002, "end": 236.08, "text": " We really have all three puzzle pieces working together, and this is an excellent application of"}, {"start": 236.08, "end": 243.04000000000002, "text": " the second law of papers, which says that everything is connected. This AI can drive itself,"}, {"start": 243.04000000000002, "end": 250.88000000000002, "text": " understands English, and can see and understand the world around it too. Wow. So now,"}, {"start": 250.88000000000002, "end": 258.48, "text": " are you thinking what I am thinking? Oh, yes. Let's give it a really hard time. Yes, final boss time."}, {"start": 258.48, "end": 266.56, "text": " Look at this prompt. Experiment number two. Let's start following the red-black building until we see"}, {"start": 266.56, "end": 275.92, "text": " a fire hydrant. And, ha ha. Let's get a bit tricky. Yep. That's right. No fire hydrant anywhere to be"}, {"start": 275.92, "end": 285.20000000000005, "text": " seen. Your move, little robot. And look at that. It stops, turns around, and goes the other way,"}, {"start": 285.2, "end": 292.96, "text": " and finally finds that highly sought after fire hydrant. Very cool. Now, into the growth to the right."}, {"start": 294.24, "end": 300.56, "text": " Okay, this might be it. And now, you are supposed to take a right when you see a manhole cover."}, {"start": 301.12, "end": 308.4, "text": " No manhole cover yet, but there were ones earlier that it is supposed to ignore, so good job"}, {"start": 308.4, "end": 317.59999999999997, "text": " not falling for these. And now, the search continues. But no manhole cover. Still, not one inside."}, {"start": 318.79999999999995, "end": 323.84, "text": " Wait. Got it. And now, to the right and find a trailer."}, {"start": 325.52, "end": 333.2, "text": " No trailer anywhere to be seen. Are we even at the right place? This is getting very confusing."}, {"start": 333.2, "end": 339.92, "text": " But with a little more perseverance. Yes, there is the trailer. Good job, little robot."}, {"start": 339.92, "end": 346.71999999999997, "text": " This is absolutely incredible. I think this description could have confused many humans,"}, {"start": 346.71999999999997, "end": 353.36, "text": " yet the robot prevailed. All three puzzle pieces are working together really well. 
This looks like"}, {"start": 353.36, "end": 359.91999999999996, "text": " something straight out of a science fiction movie. And a moment that I really loved in this paper,"}, {"start": 359.92, "end": 366.48, "text": " look, this is a scientist who is trying to keep up with the robot in the meantime. Imagine this"}, {"start": 366.48, "end": 373.44, "text": " person going home and being asked, honey, what have you been doing today? Well, I follow the robot"}, {"start": 373.44, "end": 381.52000000000004, "text": " around all day. It was the best. How cool is that? So, with Dolly too and stable diffusion,"}, {"start": 381.52000000000004, "end": 387.20000000000005, "text": " we are definitely entering the age of AI driven image creation. And with DeepMind's"}, {"start": 387.2, "end": 394.24, "text": " AlphaFold, perhaps even the age of AI driven medicine, and we are getting closer and closer"}, {"start": 394.24, "end": 401.68, "text": " to the age of AI self-driving and navigation too. Loving it. What a time to be alive."}, {"start": 402.24, "end": 410.88, "text": " And before we go, have a look at this. Oh yes, as of the making of this video, only 152 people"}, {"start": 410.88, "end": 417.76, "text": " have looked at this paper. And that dear fellow scholars is why too many papers exist. I am"}, {"start": 417.76, "end": 423.04, "text": " worried that if we don't talk about some of these works in these videos, almost no one will."}, {"start": 423.6, "end": 429.2, "text": " So, thank you so much for coming along this journey with me. And if you feel that you would like"}, {"start": 429.2, "end": 436.32, "text": " more of this, please consider subscribing. If you're looking for inexpensive cloud GPUs for AI,"}, {"start": 436.32, "end": 444.48, "text": " Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation"}, {"start": 444.48, "end": 452.24, "text": " required. Just sign up and launch an instance. And hold onto your papers because with Lambda GPU"}, {"start": 452.24, "end": 462.88, "text": " cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 462.88, "end": 471.04, "text": " That's 73% savings. Did I mention they also offer persistent storage? So join researchers at"}, {"start": 471.04, "end": 479.6, "text": " organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations, or servers."}, {"start": 479.6, "end": 486.88, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 486.88, "end": 496.88, "text": " today. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NnoTWZ9qgYg
Google's New AI: Dog Goes In, Statue Comes Out! 🗽
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation" is available here: https://dreambooth.github.io/ Try it out: 1. https://huggingface.co/sd-dreambooth-library 2. https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipyn AI Image interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 Felícia Zsolnai-Fehér’s works: https://twitter.com/twominutepapers/status/1534817417238614017 Judit Somogyvári’s works: https://www.artstation.com/sheyenne https://www.instagram.com/somogyvari.art/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, you are going to see how Google's new AI just supercharged art generation. Again, yes, this year we are entering the age of AI-based art generation. OpenAI's DALL-E 2 technique is able to take a piece of text from us and generate a stunning image that matches this description. Stable Diffusion, a similar but open-source technique, is also now available for everyone to use. And the results are so good that artists are already using these around the world to create illustrations for a novel, texture synthesis for virtual worlds, product design and more. So, are we done here? Is there nothing else to improve other than the visual fidelity of the results? Well, not quite. Have a look at two of my favorite AI-generated images, starting with this scholar who is desperately trying to hold on to his papers. So, I am very happy with this image, but imagine if we were creating a comic: we would need more images of this chap doing other things. Can we do that? Well, we are experienced Fellow Scholars over here, so we know that variant generation comes to the rescue. Well, have a look. Clearly, the AI has an understanding of the image and can create somewhat similar images. Let's see, we get someone with a similar beard and a similar paper, which is similarly on fire. The fact that the AI can have a look at such an image and create variants is a miracle of science. But, not quite what we are looking for. Why is that? Well, of course, this is a new scholar. And we are looking for the previous scholar doing other things. Now, let's try our fox scientist too and see if this was maybe just an anomaly. Maybe DALL-E 2 is just not into our scholarly content. Let's see. Well, once again, the results are pretty good. It understood that the huge ears, gloves, lab coat and the tie are important elements of the image, but ultimately, this is a different fox scientist in a different style. So, no more adventures for these scientists, right? Have we lost them forever? Should we give up? Well, not so fast. Have a look at Google's new technique, which promises a solution to this challenging problem. Yes, they promise that if we are able to take about four images of our subject, for instance, this good boy here, it will be able to synthesize completely new images with them. Let's see. Yes, this is the same dog in the Acropolis. That is excellent. However, wait a minute. This photo is very similar to this one. So basically, there was a little synthesis going on here, but otherwise, changing the background carries the show here. That is not new. We can do that with already existing tools anyway. So is that it? Well, hold on to your papers, because the answer is no. It goes beyond changing backgrounds. It goes so far beyond that that I don't even know where to start. For instance, here is our little doggy swimming, sleeping in a bucket, and we can even give this good boy a haircut. That is absolutely insane. And all of these new situations were synthesized by using only four input photos. Wow, now we're talking. Similarly, if we have a pair of stylish sunglasses, we can ask a bear to wear them, make a cool product photo out of them, or put them in front of the Eiffel Tower. And as I am a light transport researcher by trade, I have to note that even secondary effects like the reflection of the glasses here are modeled really well. And so are the reflections here. I love it. But it can do so much more than this. Here are five of my favorite examples from the paper.
One, we can not only put our favorite teapot into different contexts or see it in use, but we can even reimagine an otherwise opaque object and see what it would look like if it were made of a transparent material like glass. I love it. And I bet that product design people will love it too. Two, we can create art renditions of our test subject. Here the input is only three photos of a dog, but the output... The output is priceless. We can commission art renditions from legendary artists of the past, and all this nearly for free. How cool is that? Three, I hope you liked this teapot property modification concept, because we are going to push it quite a bit further. For instance, before repainting our car, we can have a closer look at what it would look like, and we can also reimagine our favorite little pets as other animals. Which one is your favorite? Let me know in the comments below. Mine is the hippo. It has to be the hippo. Look at how adorable it is. And if even that is not enough, four, we can also ask the AI to reimagine our little dog as a chef, a nurse, a police dog and many others. And all of these images are fantastic. And finally, five, it can perform no less than view synthesis too. We know from previous papers that this is quite challenging, and this one only requires four photos of our cat. And of course, if the cat is refusing to turn in the right direction, which happens basically every time, well, no matter. The AI can re-synthesize an image of it looking in our desired direction. This one looks like something straight out of a science fiction movie. What a time to be alive. And this is a huge step forward. You see, previous techniques typically struggled with this; even in the best cases, the fidelity of the results suffers. This alarm clock is not the same as our input photo was. And neither is this one. And with the new technique, now we're talking. Bravo, Google. And once again, don't forget, the First Law of Papers is on full display here. This huge leap happened just one more paper down the line. I am truly stunned by these results, and I cannot even imagine what we will be able to do two more papers down the line. So what do you think? What would you use this for? Let me know in the comments below. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
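As a rough idea of what using such a subject-driven model looks like in practice: the sketch below assumes a checkpoint has already been fine-tuned on the handful of subject photos (the Hugging Face diffusers examples ship a DreamBooth training script for this). The checkpoint path and the rare identifier token "sks" are illustrative placeholders, not values from Google's paper.

```python
# A minimal sketch of inference with an already fine-tuned DreamBooth-style
# checkpoint via Hugging Face diffusers. The local path and the "sks"
# identifier token are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/my-dreambooth-model",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# After fine-tuning, the identifier token means *our* dog, so we can place
# it in brand-new situations, just like in the video.
image = pipe("a photo of sks dog swimming in the Acropolis",
             guidance_scale=7.5).images[0]
image.save("sks_dog_acropolis.png")
```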
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute PaperSuite Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.120000000000001, "text": " Today, you are going to see how Google's new AI just supercharged art generation."}, {"start": 11.120000000000001, "end": 17.400000000000002, "text": " Again, yes, this year we are entering the age of AI-based art generation."}, {"start": 17.400000000000002, "end": 22.88, "text": " Open AI's Dolly2 technique is able to take a piece of text from us"}, {"start": 22.88, "end": 27.76, "text": " and generate a stunning image that matches this description."}, {"start": 27.76, "end": 35.56, "text": " Stable diffusion, a similar but open source technique, is also now available for everyone to use."}, {"start": 35.56, "end": 41.32, "text": " And the results are so good that artists are already using these around the world"}, {"start": 41.32, "end": 49.120000000000005, "text": " to create illustrations for a novel, texture synthesis for virtual worlds, product design and more."}, {"start": 49.120000000000005, "end": 51.24, "text": " So, are we done here?"}, {"start": 51.24, "end": 56.32000000000001, "text": " Is there nothing else to improve other than the visual fidelity of the results?"}, {"start": 56.32, "end": 58.56, "text": " Well, not quite."}, {"start": 58.56, "end": 65.16, "text": " Have a look at two of my favorite AI generated images, this scholar who is desperately trying"}, {"start": 65.16, "end": 67.28, "text": " to hold on to his papers."}, {"start": 67.28, "end": 74.0, "text": " So I am very happy with this image, but imagine if we were creating a comic, we would need"}, {"start": 74.0, "end": 78.03999999999999, "text": " more images of this chap doing other things."}, {"start": 78.03999999999999, "end": 79.03999999999999, "text": " Can we do that?"}, {"start": 79.03999999999999, "end": 84.96000000000001, "text": " Well, we are experienced fellow scholars over here, so we know that very in generation"}, {"start": 84.96, "end": 86.72, "text": " comes to the rescue."}, {"start": 86.72, "end": 89.24, "text": " Well, have a look."}, {"start": 89.24, "end": 96.03999999999999, "text": " Clearly, the AI has an understanding of the image and can create somewhat similar images."}, {"start": 96.03999999999999, "end": 104.16, "text": " Let's see, we get someone with a similar beard, similar paper, which is similarly on fire."}, {"start": 104.16, "end": 110.36, "text": " The fact that the AI can have a look at such an image and create variants is a miracle"}, {"start": 110.36, "end": 111.83999999999999, "text": " of science."}, {"start": 111.83999999999999, "end": 114.91999999999999, "text": " But, not quite what we are looking for."}, {"start": 114.92, "end": 115.92, "text": " Why is that?"}, {"start": 115.92, "end": 119.8, "text": " Well, of course, this is a new scholar."}, {"start": 119.8, "end": 123.68, "text": " And we are looking for the previous scholar doing other things."}, {"start": 123.68, "end": 131.12, "text": " Now, let's try our Fox scientist too and see if this was maybe just anonymally."}, {"start": 131.12, "end": 135.64, "text": " Maybe Dolly too is just not into our scholarly content."}, {"start": 135.64, "end": 136.64, "text": " Let's see."}, {"start": 136.64, "end": 140.56, "text": " Well, once again, the results are pretty good."}, {"start": 140.56, "end": 147.44, "text": " It understood that the huge ears, gloves, lap coat and the tie are important elements"}, {"start": 147.44, "end": 154.24, "text": " of 
the image, but ultimately, this is a different Fox scientist in a different style."}, {"start": 154.24, "end": 158.28, "text": " So, no more adventures for these scientists, right?"}, {"start": 158.28, "end": 160.16, "text": " Have we lost them forever?"}, {"start": 160.16, "end": 161.2, "text": " Should we give up?"}, {"start": 161.2, "end": 163.48000000000002, "text": " Well, not so fast."}, {"start": 163.48000000000002, "end": 167.96, "text": " Have a look at Google's new technique which promises a solution to this challenging"}, {"start": 167.96, "end": 168.96, "text": " problem."}, {"start": 168.96, "end": 175.56, "text": " Yes, they promise that if we are able to take about four images of our subject, for instance,"}, {"start": 175.56, "end": 181.28, "text": " this good boy here, it will be able to synthesize completely new images with them."}, {"start": 181.28, "end": 182.28, "text": " Let's see."}, {"start": 182.28, "end": 186.60000000000002, "text": " Yes, this is the same dog in the Acropolis."}, {"start": 186.60000000000002, "end": 188.0, "text": " That is excellent."}, {"start": 188.0, "end": 190.36, "text": " However, wait a minute."}, {"start": 190.36, "end": 193.56, "text": " This photo is very similar to this one."}, {"start": 193.56, "end": 200.52, "text": " So basically, there was a little synthesis going on here, but otherwise, changing the background"}, {"start": 200.52, "end": 202.4, "text": " carries the show here."}, {"start": 202.4, "end": 203.96, "text": " That is not new."}, {"start": 203.96, "end": 207.56, "text": " We can do that with already existing tools anyway."}, {"start": 207.56, "end": 209.2, "text": " So is that it?"}, {"start": 209.2, "end": 213.12, "text": " Well, hold on to your papers because the answer is no."}, {"start": 213.12, "end": 215.92000000000002, "text": " It goes beyond changing backgrounds."}, {"start": 215.92000000000002, "end": 220.84, "text": " It goes so far beyond that that I don't even know where to start."}, {"start": 220.84, "end": 228.16, "text": " For instance, here is our little doggy swimming, sleeping in a bucket, and we can even give"}, {"start": 228.16, "end": 230.52, "text": " this good boy a haircut."}, {"start": 230.52, "end": 233.28, "text": " That is absolutely insane."}, {"start": 233.28, "end": 239.28, "text": " And all of these new situations were synthesized by using only four input photos."}, {"start": 239.28, "end": 241.92000000000002, "text": " Wow, now we're talking."}, {"start": 241.92000000000002, "end": 249.08, "text": " Similarly, if we have a pair of stylish sunglasses, we can ask a bear to wear it, make a cool product"}, {"start": 249.08, "end": 254.24, "text": " photo out of it, or put it in front of the Eiffel Tower."}, {"start": 254.24, "end": 259.56, "text": " And as I am a light transport researcher by trade, I have to note that even secondary"}, {"start": 259.56, "end": 264.92, "text": " effects like the reflection of the glasses here are modeled really well."}, {"start": 264.92, "end": 267.32, "text": " And so are the reflections here."}, {"start": 267.32, "end": 269.08000000000004, "text": " I love it."}, {"start": 269.08000000000004, "end": 271.96000000000004, "text": " But it can do so much more than this."}, {"start": 271.96000000000004, "end": 275.76, "text": " Here are five of my favorite examples from the paper."}, {"start": 275.76, "end": 282.12, "text": " One, we can not only put our favorite teapot into different contexts or see it in use,"}, {"start": 282.12, "end": 288.44, "text": " 
but we can even reimagine an otherwise opaque object and see what it would look like if"}, {"start": 288.44, "end": 292.08, "text": " it were made of a transparent material like glass."}, {"start": 292.08, "end": 293.76, "text": " I love it."}, {"start": 293.76, "end": 298.15999999999997, "text": " And I bet that product design people will also love it too."}, {"start": 298.15999999999997, "end": 302.48, "text": " Two, we can create our transitions of our test subject."}, {"start": 302.48, "end": 307.64000000000004, "text": " Here the input is only three photos of a dog, but the output."}, {"start": 307.64000000000004, "end": 309.64000000000004, "text": " The output is priceless."}, {"start": 309.64000000000004, "end": 316.04, "text": " We can commission art transitions from legendary artists of the past and all this nearly for"}, {"start": 316.04, "end": 317.04, "text": " free."}, {"start": 317.04, "end": 318.52000000000004, "text": " How cool is that?"}, {"start": 318.52000000000004, "end": 324.16, "text": " Three, I hope you like this teapot property modification concept because we are going to"}, {"start": 324.16, "end": 326.40000000000003, "text": " push it quite a bit further."}, {"start": 326.40000000000003, "end": 331.76, "text": " For instance, before repainting our car, we can have a closer look at what it would look"}, {"start": 331.76, "end": 337.92, "text": " like and we can also reimagine our favorite little pets as other animals."}, {"start": 337.92, "end": 339.71999999999997, "text": " Which one is your favorite?"}, {"start": 339.71999999999997, "end": 341.84, "text": " Let me know in the comments below."}, {"start": 341.84, "end": 343.2, "text": " Mine is the hippo."}, {"start": 343.2, "end": 345.03999999999996, "text": " It has to be the hippo."}, {"start": 345.03999999999996, "end": 347.2, "text": " Look at how adorable it is."}, {"start": 347.2, "end": 353.68, "text": " And if even that is not enough, for we can also as the AI to reimagine our little dog"}, {"start": 353.68, "end": 358.92, "text": " as a chef, a nurse, a police dog and many others."}, {"start": 358.92, "end": 362.16, "text": " And all of these images are fantastic."}, {"start": 362.16, "end": 368.24, "text": " And finally, five, it can perform no less than view synthesis too."}, {"start": 368.24, "end": 373.64000000000004, "text": " We know from previous papers that this is quite challenging and this one only requests"}, {"start": 373.64000000000004, "end": 375.96000000000004, "text": " four photos of our cat."}, {"start": 375.96000000000004, "end": 382.24, "text": " And of course, if the cat is refusing to turn the right direction, which happens basically"}, {"start": 382.24, "end": 385.16, "text": " every time, well, no matter."}, {"start": 385.16, "end": 391.52000000000004, "text": " The AI can recenterize an image of it, looking into our desired directions."}, {"start": 391.52000000000004, "end": 396.28000000000003, "text": " This one looks like something straight out of a science fiction movie."}, {"start": 396.28000000000003, "end": 398.04, "text": " What a time to be alive."}, {"start": 398.04, "end": 400.32000000000005, "text": " And this is a huge step forward."}, {"start": 400.32000000000005, "end": 405.48, "text": " You see, previous techniques typically struggled with this even in the best cases, the"}, {"start": 405.48, "end": 408.16, "text": " fidelity of the results suffer."}, {"start": 408.16, "end": 412.8, "text": " This alarm clock is not the same as our input photo was."}, {"start": 
412.8, "end": 414.8, "text": " And neither is this one."}, {"start": 414.8, "end": 417.72, "text": " And with the new technique, now we're talking."}, {"start": 417.72, "end": 419.40000000000003, "text": " Bravo Google."}, {"start": 419.40000000000003, "end": 424.84000000000003, "text": " And once again, don't forget, the first lot of papers is on full display here."}, {"start": 424.84000000000003, "end": 429.0, "text": " This huge leap happened just one more paper down the line."}, {"start": 429.0, "end": 435.88, "text": " I am truly stunned by these results and I cannot even imagine what we will be able to do"}, {"start": 435.88, "end": 438.0, "text": " two more papers down the line."}, {"start": 438.0, "end": 439.84000000000003, "text": " So what do you think?"}, {"start": 439.84000000000003, "end": 441.72, "text": " What would you use this for?"}, {"start": 441.72, "end": 443.40000000000003, "text": " Let me know in the comments below."}, {"start": 443.4, "end": 446.56, "text": " This video has been supported by weights and biases."}, {"start": 446.56, "end": 451.64, "text": " Check out the recent offering fully connected, a place where they bring machine learning"}, {"start": 451.64, "end": 458.47999999999996, "text": " practitioners together to share and discuss their ideas, learn from industry leaders, and"}, {"start": 458.47999999999996, "end": 461.23999999999995, "text": " even collaborate on projects together."}, {"start": 461.23999999999995, "end": 466.23999999999995, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the"}, {"start": 466.23999999999995, "end": 470.64, "text": " series, but don't really know where to start."}, {"start": 470.64, "end": 472.2, "text": " And here it is."}, {"start": 472.2, "end": 477.88, "text": " Reconnected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 477.88, "end": 481.76, "text": " get your papers accepted to a conference, and more."}, {"start": 481.76, "end": 489.0, "text": " Make sure to visit them through wnb.me slash papers or just click the link in the video description."}, {"start": 489.0, "end": 494.03999999999996, "text": " Our thanks to weights and biases for their longstanding support and for helping us make"}, {"start": 494.03999999999996, "end": 495.4, "text": " better videos for you."}, {"start": 495.4, "end": 502.56, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bVxS9RXt2q8
NVIDIA’s New AI: Beautiful Simulations, Cheaper! 💨
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "NeuralVDB: High-resolution Sparse Volume Representation using Hierarchical Neural Networks" is available here: https://developer.nvidia.com/rendering-technologies/neuralvdb https://blogs.nvidia.com/blog/2022/08/09/neuralvdb-ai/ https://arxiv.org/abs/2208.04448 📝 The paper with the water simulation is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to talk about an amazing new technique to make these beautiful but very expensive computer simulations cheaper. How much cheaper? Well, I will tell you in a moment. You see, around here, we discuss research papers like this and this. We talk a great deal about how difficult and computationally expensive it is to simulate all these physics, but that's not the only thing that needs to be done here. All this data also has to be stored too. That is a ton of geometry information. How much exactly? Well, 15 gigabytes for a simulation is not uncommon, and my fluid simulation that you see here took well over 100 gigabytes of space to store. Some of the more extreme examples can fill several hard drives for only a minute of physics simulation data. That is an insane amount of data. So, we need something to help compress all this data. And the weapon of choice for working with this kind of volumetric data is often a tool called OpenVDB. It has a hierarchical structure within, and scientists at Nvidia had a crazy idea. They said, how about exchanging this with one of these learning-based methods that you are hearing about everywhere? Well, okay, but why just throw a neural network at the problem? Is this really going to help? Well, maybe. You see, neural networks have exceptional compression capabilities. For instance, we have seen with their earlier paper that such an AI can help us transmit video data of us by, get this, only taking the first image from the video and just a tiny bit of extra data, and they throw away the entire video afterwards. And it still works. So, can these techniques do some more magic in the area of physics simulations? Those are much more complex, but who knows? Let's see. And now, hold on to your papers, because it looks like this. Now, that is all well and good, but here's the kicker. It takes about 95% less storage than the previous method. Yes, that is a 20x compression, while the two pieces of footage still look the same. Wow! That is absolutely incredible. And the previous OpenVDB solution is not just for some hobby projects. It is used in many of the blockbuster feature-length movies you all know and love. These movies won many, many Academy Awards. And we are not done yet. Not even close. It gets even better. If you think 20x is the best it can do, now look, we also have 50x here. And sometimes it gets up to even 60 times smaller. My goodness. And this new NeuralVDB technique from Nvidia can be used not only for these amazing movies, but for everything else OpenVDB was useful for. Oh yes, that means that we can go beyond these movies and use it for scientific and even industrial visualizations. They also have the machinery to read and process all this data with a graphics card. Therefore, it not only decreases the memory footprint of these techniques, but it even improves how quickly they run. My mind is officially blown. So much progress in so little time. And this is my favorite kind of research, and that is when we look into the intersection of computer graphics and AI. That is where we find some of the most beautiful works. And don't forget, these applications are plentiful. This could also be used with FourCastNet, which is a physics model that can predict outlier weather events. The coolest part about this work is that it runs not in a data center anymore, but today it runs on just one Nvidia graphics card.
And with NeuralVDB, it will require even less memory to do so. And it will help with all of these absolute miracles of science that you see here too. So, all of these could run with an up to 60 times smaller memory footprint. That is incredible. What a time to be alive! Huge congratulations to Doyub Kim, the first author of the paper, and his team for this amazing achievement. And once again, you see the power of AI, the power of the papers, and tech transfer at the same time. And since Nvidia has an excellent record in putting these tools into the hands of all of us, for instance, they did it with the previous incarnation of this technique, NanoVDB, I am convinced that we are all going to be able to enjoy this amazing paper soon. How cool is that? So, does this get your mind going? What would you use this for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
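Two quick sanity checks on the numbers above, plus a toy illustration of the underlying idea of replacing stored voxels with a small neural network that is queried by coordinate. This is not NVIDIA's NeuralVDB code (which keeps OpenVDB's sparse hierarchy and attaches compact networks to it); the 64-cubed volume, network size, and training schedule here are illustrative assumptions only.

import torch
import torch.nn as nn

# "95% less storage" and "20x compression" are the same statement:
print(1 / (1 - 0.95))  # 20.0
# and 60x smaller corresponds to roughly 98.3% less storage:
print(1 - 1 / 60)      # 0.9833...

# Toy neural field: ~260k density samples approximated by a ~1.2k-parameter MLP.
axes = [torch.linspace(-1, 1, 64)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
coords = grid.reshape(-1, 3)
density = torch.exp(-4.0 * (coords ** 2).sum(dim=-1, keepdim=True))  # a smooth blob

field = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1)
)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(200):
    idx = torch.randint(0, coords.shape[0], (4096,))
    loss = ((field(coords[idx]) - density[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

n_params = sum(p.numel() for p in field.parameters())
print(coords.shape[0] / n_params)  # stored samples per network weight, ~215x here

The network's weights are the compressed representation: to "decompress", you simply query it at any coordinate you need, which is also why the GPU-resident version can render straight from the compact form.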
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kato Jean-Lay Fahir."}, {"start": 4.64, "end": 14.36, "text": " Today, we are going to talk about an amazing new technique to make these beautiful but very expensive computer simulations cheaper."}, {"start": 14.36, "end": 18.16, "text": " How much cheaper? Well, I will tell you in a moment."}, {"start": 18.16, "end": 23.56, "text": " You see, around here, we discuss research papers like this and this."}, {"start": 23.56, "end": 34.6, "text": " We talk a great deal about how difficult and computationally expensive it is to simulate all these physics, but that's not the only thing that needs to be done here."}, {"start": 34.6, "end": 41.239999999999995, "text": " All this data also has to be stored too. That is a ton of geometry information."}, {"start": 41.24, "end": 54.0, "text": " How much exactly? Well, 15 gigabytes for a simulation is not uncommon and my fluid simulation that you see here took well over 100 gigabytes of space to store."}, {"start": 54.0, "end": 64.24000000000001, "text": " Some of the more extreme examples can feel several hard drives for only a minute of physics simulation data. That is an insane amount of data."}, {"start": 64.24, "end": 77.0, "text": " So, we need something to help compressing down all these data. And the weapon of choice for working with this kind of volumetric data is often a tool called OpenVDB."}, {"start": 77.0, "end": 83.88, "text": " It has this hierarchical structure within and scientists at Nvidia had a crazy idea."}, {"start": 83.88, "end": 91.47999999999999, "text": " They said, how about exchanging this with all of these learning-based methods that you are hearing about everywhere?"}, {"start": 91.48, "end": 100.32000000000001, "text": " Well, okay, but why just throw a neural network at the problem? Is this really going to help? Well, maybe."}, {"start": 100.32000000000001, "end": 105.24000000000001, "text": " You see, neural networks have exceptional compression capabilities."}, {"start": 105.24000000000001, "end": 114.76, "text": " For instance, we have seen with their earlier paper that such an AI can help us transmit video data of us by get this."}, {"start": 114.76, "end": 125.64, "text": " Only taking the first image from the video and just a tiny bit of extra data and they throw away the entire video afterwards."}, {"start": 125.64, "end": 133.32, "text": " And it still works. So, can these techniques do some more magic in the area of physics simulations?"}, {"start": 133.32, "end": 137.96, "text": " Those are much more complex, but who knows? Let's see."}, {"start": 137.96, "end": 146.24, "text": " And now, hold on to your papers because it looks like this. Now, that is all well and good, but here's the kicker."}, {"start": 146.24, "end": 158.28, "text": " It takes about 95% less storage than the previous method. Yes, that is a 20X compression, while the two pieces of footage still look the same."}, {"start": 158.28, "end": 161.88, "text": " Wow! That is absolutely incredible."}, {"start": 161.88, "end": 172.84, "text": " And the previous OpenVDB solution is not just for some hobby projects. It is used in many of the blockbuster feature length movies you all know and love."}, {"start": 172.84, "end": 181.64, "text": " These movies won many, many Academy Awards. And we are not done yet. Not even close. 
It gets even better."}, {"start": 181.64, "end": 194.04, "text": " If you think 20X is the best it can do, now look, we also have 50X here. And sometimes it gets up to even 60 times smaller."}, {"start": 194.04, "end": 207.23999999999998, "text": " My goodness. And this new neural VDB technique from Nvidia can be used for not only these amazing movies, but for everything else, OpenVDB was useful for."}, {"start": 207.24, "end": 217.24, "text": " Oh yes, that means that we can go beyond these movies and use it for scientific and even industrial visualizations."}, {"start": 217.24, "end": 223.0, "text": " They also have the machinery to read and process all this data with a graphics card."}, {"start": 223.0, "end": 231.4, "text": " Therefore, it not only decreases the memory footprint of these techniques, but it even improves how quickly they run."}, {"start": 231.4, "end": 239.64000000000001, "text": " My mind is officially blown. So much progress in so little time. And this is my favorite kind of research."}, {"start": 239.64000000000001, "end": 245.32, "text": " And that is when we look into the intersection of computer graphics and AI."}, {"start": 245.32, "end": 249.32, "text": " That is where we find some of the most beautiful works."}, {"start": 249.32, "end": 261.0, "text": " And don't forget, these applications are plentiful. This could also be used with ForkastNet, which is a physics model that can predict outlier weather events."}, {"start": 261.0, "end": 271.08, "text": " The coolest part about this work is that it runs not in a data center anymore, but today it runs on just one Nvidia graphics card."}, {"start": 271.08, "end": 276.12, "text": " And with neural VDB, it will require even less memory to do so."}, {"start": 276.12, "end": 282.04, "text": " And it will help with all of these absolute miracles of science that you see here too."}, {"start": 282.04, "end": 289.8, "text": " So, all of these could run with an up to 60 times smaller memory footprint. That is incredible."}, {"start": 289.8, "end": 291.8, "text": " What a time to be alive!"}, {"start": 291.8, "end": 298.84000000000003, "text": " Huge congratulations to Doyub Kim, the first author of the paper and his team for this amazing achievement."}, {"start": 298.84000000000003, "end": 307.32, "text": " And once again, you see the power of AI, the power of the papers, and tech transfer at the same time."}, {"start": 307.32, "end": 314.12, "text": " And since Nvidia has an excellent record in putting these tools into the hands of all of us,"}, {"start": 314.12, "end": 319.64, "text": " for instance, they did it with the previous incarnation of this technique, nano VDB."}, {"start": 319.64, "end": 325.48, "text": " I am convinced that we are all going to be able to enjoy this amazing paper soon."}, {"start": 325.48, "end": 327.4, "text": " How cool is that?"}, {"start": 327.4, "end": 331.56, "text": " So, does this get your mind going? 
What would you use this for?"}, {"start": 331.56, "end": 333.24, "text": " Let me know in the comments below."}, {"start": 333.24, "end": 343.64, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute."}, {"start": 343.64, "end": 346.44, "text": " No commitments or negotiation required."}, {"start": 346.44, "end": 349.47999999999996, "text": " Just sign up and launch an instance."}, {"start": 349.47999999999996, "end": 363.96, "text": " And hold on to your papers because with Lambda GPU cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 363.96, "end": 366.76, "text": " That's 73% savings."}, {"start": 366.76, "end": 376.12, "text": " Did I mention they also offer persistent storage, so join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 376.12, "end": 380.68, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 380.68, "end": 388.59999999999997, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 388.59999999999997, "end": 390.84, "text": " Thanks for watching and for your generous support."}, {"start": 390.84, "end": 399.96, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lj8qofm4n4o
NVIDIA’s AI: Amazing DeepFakes And Virtual Avatars!
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" is available here: https://nvlabs.github.io/face-vid2vid/ Try it out: http://imaginaire.cc/vid2vid-cameo/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #deepfake
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see that the research papers that you see here in this series are real. Here you see Nvidia's game-changing video conferencing AI. So, what does this do? Why is this so interesting? How does this transfer a video of us over the internet? Well, here is a crazy idea. It doesn't. What? Transmitting video without transmitting video, how is that even possible? Well, now it is possible. What they do in this work is take only the first image from the video, and they throw away the entire video afterwards. But before that, it stores a tiny bit of information from it, which is how our head is moving over time and how our expressions change. That is an absolutely outrageous idea, except for the fact that it works. And it not only works, but it works really well. And because this is an amazing paper, it does not stop there. It can do even more. Look at these two previous methods trying to frontalize the input video. This means that we look to the side a little, and the algorithm synthesizes a new image of us as if the camera was right in front of us. That sounds like a science fiction movie, except that it seems absolutely impossible given how much these techniques are struggling with the task... until we look at the new method. My goodness. There is some jumpiness in the neck movement in the output video here and some warping issues here, but otherwise very impressive results. Now, if you have been holding onto your paper so far, squeeze that paper, because these previous methods are not some ancient papers that were published a long time ago, not at all. Both of them were published within the same year as the new paper. How amazing is that? Wow! And it can also perform deepfakes too. Look, we only need one image of the target person, and we can transfer all of our gestures to them in a way that is significantly better than most previous methods. Now, of course, not even this technique was perfect; it still struggled a great deal in the presence of an occluder object, but still, just the fact that this is now possible feels like we are living in the future. What a time to be alive! Now, I said that today we are going to see that this paper is as real as it gets. So what does that mean? Well, today anyone can try this technique. Yes, just one year after publishing this paper, it is now available as a demo. The link to it is available in the video description below. And get this, some people at Nvidia are already using it for their virtual meetings. And by it, I mean the compression engine, where the previous industry-standard compression algorithm could only do this given very little data. That is not much, but the new technique with the same amount of data can now do this. That is insanity, loving it. And they also use their own gestures to create these deepfakes and make virtual characters come alive, or to frontalize their videos when talking to each other. It almost feels like we are living in a science fiction movie. And all this is out there for us to use. Also, these technologies will soon be part of the Nvidia Video Codec SDK as the AI Face Codec, which means that it will soon be deployed to an even wider audience. These companies are already using it. So this is one more amazing example that shows that the papers that you see here in Two Minute Papers are real. Sometimes so real that we can go from a research paper to a product in just a year. That is absolutely miraculous. So what do you think?
What would you use this for? Let me know in the comments below. This video has been supported by Weights & Biases. Look at this: they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more. Make sure to visit wandb.me/paperforum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
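For a back-of-the-envelope feel for why transmitting one image plus motion information saves so much bandwidth, here is a small arithmetic sketch. The keypoint count, bytes per keypoint, and bitrate are made-up but plausible assumptions; the paper's actual representation and numbers differ.

# All constants below are illustrative assumptions, not values from the paper.
KEYFRAME_BYTES = 50_000        # one compressed reference image, sent once
KEYPOINTS = 20                 # assumed facial keypoints transmitted per frame
BYTES_PER_KEYPOINT = 3 * 4     # x, y, z as 32-bit floats
FPS, SECONDS = 30, 60          # a one-minute call

motion_bytes = KEYPOINTS * BYTES_PER_KEYPOINT * FPS * SECONDS
total = KEYFRAME_BYTES + motion_bytes
video_bytes = 500_000 // 8 * SECONDS  # a conventional 500 kbit/s video stream

print(f"keypoint stream: {total / 1e6:.2f} MB, video stream: {video_bytes / 1e6:.2f} MB")
print(f"roughly {video_bytes / total:.0f}x less data")

Even with these conservative numbers, the receiver reconstructs every frame from the single reference image plus the tiny motion stream, which is where the bandwidth win comes from.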
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.76, "end": 11.040000000000001, "text": " Today we are going to see that the research papers that you see here in this series are real."}, {"start": 11.040000000000001, "end": 15.84, "text": " Here you see Nvidia's Game Changing Video Conferencing AI."}, {"start": 15.84, "end": 17.88, "text": " So, what does this do?"}, {"start": 17.88, "end": 20.400000000000002, "text": " Why is this so interesting?"}, {"start": 20.400000000000002, "end": 24.6, "text": " How does this transfer a video of us over the internet?"}, {"start": 24.6, "end": 27.52, "text": " Well, here is a crazy idea."}, {"start": 27.52, "end": 28.76, "text": " It doesn't."}, {"start": 28.76, "end": 29.76, "text": " What?"}, {"start": 29.76, "end": 35.32, "text": " Transmitting video without transmitting video, how is that even possible?"}, {"start": 35.32, "end": 38.52, "text": " Well, now it is possible."}, {"start": 38.52, "end": 45.120000000000005, "text": " What they do in this work is take only the first image from the video and they throw away"}, {"start": 45.120000000000005, "end": 47.52, "text": " the entire video afterwards."}, {"start": 47.52, "end": 54.84, "text": " But before that, it stores a tiny bit of information from it which is how our head is moving"}, {"start": 54.84, "end": 59.120000000000005, "text": " over time and how our expressions change."}, {"start": 59.120000000000005, "end": 64.72, "text": " That is an absolutely outrageous idea except the fact that it works."}, {"start": 64.72, "end": 68.76, "text": " And it not only works, but it works really well."}, {"start": 68.76, "end": 72.72, "text": " And because this is an amazing paper, it does not stop there."}, {"start": 72.72, "end": 74.72, "text": " It can do even more."}, {"start": 74.72, "end": 80.32000000000001, "text": " Look at these two previous methods trying to frontalize the input video."}, {"start": 80.32, "end": 86.32, "text": " This means that we look to the side a little and the algorithm synthesizes a new image of"}, {"start": 86.32, "end": 90.75999999999999, "text": " us as if the camera was right in front of us."}, {"start": 90.75999999999999, "end": 97.32, "text": " That sounds like a science fiction movie except that it seems absolutely impossible given"}, {"start": 97.32, "end": 103.39999999999999, "text": " how much these techniques are struggling with the task until we look at the new method."}, {"start": 103.39999999999999, "end": 105.32, "text": " My goodness."}, {"start": 105.32, "end": 111.0, "text": " There is some jumpiness in the neck movement in the output video here and some warping"}, {"start": 111.0, "end": 115.6, "text": " issues here but otherwise very impressive results."}, {"start": 115.6, "end": 121.44, "text": " Now if you have been holding onto your paper so far, squeeze that paper because these previous"}, {"start": 121.44, "end": 127.83999999999999, "text": " methods are not some ancient papers that were published a long time ago, not at all."}, {"start": 127.83999999999999, "end": 132.88, "text": " Both of them were published within the same year as the new paper."}, {"start": 132.88, "end": 134.88, "text": " How amazing is that?"}, {"start": 134.88, "end": 135.88, "text": " Wow!"}, {"start": 135.88, "end": 139.32, "text": " And it could also perform deepfakes too."}, {"start": 139.32, "end": 146.16, "text": " Look, we only need one image of the target person and we can transfer all of 
our gestures"}, {"start": 146.16, "end": 151.84, "text": " to them in a way that is significantly better than most previous methods."}, {"start": 151.84, "end": 157.51999999999998, "text": " Now of course not even this technique was perfect, it still struggled a great deal in the"}, {"start": 157.51999999999998, "end": 163.6, "text": " presence of a Cluder object but still just the fact that this is now possible feels like"}, {"start": 163.6, "end": 166.04, "text": " we are living in the future."}, {"start": 166.04, "end": 168.04, "text": " What a time to be alive!"}, {"start": 168.04, "end": 175.2, "text": " Now I said that today we are going to see that this paper is as real as it gets."}, {"start": 175.2, "end": 177.2, "text": " So what does that mean?"}, {"start": 177.2, "end": 180.88, "text": " Well today anyone can try this technique."}, {"start": 180.88, "end": 188.12, "text": " Yes, just one year after publishing this paper and it is now available as a demo."}, {"start": 188.12, "end": 192.04, "text": " The link to it is available in the video description below."}, {"start": 192.04, "end": 198.72, "text": " And get this, some people at Nvidia are already using it for their virtual meetings."}, {"start": 198.72, "end": 205.0, "text": " And by it I mean both the compression engine where the previous industry standard compression"}, {"start": 205.0, "end": 210.23999999999998, "text": " algorithm could do this one given very little data."}, {"start": 210.23999999999998, "end": 217.48, "text": " That is not much, but the new technique with the same amount of data can now do this."}, {"start": 217.48, "end": 220.6, "text": " That is insanity, loving it."}, {"start": 220.6, "end": 227.16, "text": " And they also use their own gestures to create these deepfakes and make virtual characters"}, {"start": 227.16, "end": 232.51999999999998, "text": " come alive or to frontalize their videos when talking to each other."}, {"start": 232.51999999999998, "end": 236.88, "text": " It almost feels like we are living in a science fiction movie."}, {"start": 236.88, "end": 240.07999999999998, "text": " And all this is out there for us to use."}, {"start": 240.07999999999998, "end": 247.51999999999998, "text": " Also these technologies will soon be part of the Nvidia Video Codec SDK as the AI Face"}, {"start": 247.52, "end": 253.68, "text": " Codec which means that it will be soon deployed to an even wider audience to use."}, {"start": 253.68, "end": 256.48, "text": " These companies are already using it."}, {"start": 256.48, "end": 262.64, "text": " So this is one more amazing example that shows that the papers that you see here in two"}, {"start": 262.64, "end": 265.24, "text": " minute papers are real."}, {"start": 265.24, "end": 271.44, "text": " Sometimes so real that we can go from a research paper to a product in just a year."}, {"start": 271.44, "end": 274.52, "text": " That is absolutely miraculous."}, {"start": 274.52, "end": 278.24, "text": " So what do you think? 
What would you use this for?"}, {"start": 278.24, "end": 280.03999999999996, "text": " Let me know in the comments below."}, {"start": 280.03999999999996, "end": 283.76, "text": " This video has been supported by weights and biases."}, {"start": 283.76, "end": 289.24, "text": " Look at this, they have a great community forum that aims to make you the best machine"}, {"start": 289.24, "end": 291.35999999999996, "text": " learning engineer you can be."}, {"start": 291.35999999999996, "end": 296.24, "text": " You see, I always get messages from you fellow scholars telling me that you have been"}, {"start": 296.24, "end": 301.47999999999996, "text": " inspired by the series, but don't really know where to start."}, {"start": 301.47999999999996, "end": 302.79999999999995, "text": " And here it is."}, {"start": 302.8, "end": 308.48, "text": " In this forum, you can share your projects, ask for advice, look for collaborators and"}, {"start": 308.48, "end": 309.48, "text": " more."}, {"start": 309.48, "end": 317.56, "text": " Make sure to visit www.me-slash-paper-forum and say hi or just click the link in the video"}, {"start": 317.56, "end": 318.56, "text": " description."}, {"start": 318.56, "end": 323.72, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 323.72, "end": 325.0, "text": " better videos for you."}, {"start": 325.0, "end": 335.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=G7gdOPEd6mU
DeepMind’s AlphaFold: 200 Gifts To Humanity! 🧬
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Highly accurate protein structure prediction with #AlphaFold" is available here: https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe https://www.nature.com/articles/s41586-021-03819-2 https://www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology Database: https://www.uniprot.org/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today you will see history in the making. We are going to have a look at AlphaFold from DeepMind, perhaps one of the most important papers of the last few years. This is a true gift to humanity. And if even that is not good enough, it gets better, because today you will see how this gift has now become 200 times bigger. So, what is AlphaFold? AlphaFold is an AI that is capable of solving protein structure prediction, which is often referred to as protein folding. Okay, but what is a protein, and why does it need folding? A protein is a string of amino acids; these are the building blocks of life. This is what goes in, which in reality has a 3D structure. And that is protein folding. Letters go in, the 3D object comes out. This is hard, really hard. For instance, DeepMind's earlier AI programs were able to play hard games like chess, Go and Starcraft 2, and none of these AIs were a match for protein folding. Yet, AlphaFold has become the world's best solution at the CASP event. I've heard DeepMind CEO Demis Hassabis call it the Olympics of protein folding. If you look at how teams of scientists prepare for this event, you will probably agree that yes, this is indeed the Olympics of protein folding. And what is even more incredible is that it has reached a score of 90, which means two amazing things. One, in absolute terms, AlphaFold 2 is considered to be about 3 times better than previous solutions. And two, the score of 90 means that we can consider protein folding a mostly solved problem. Wow, this is not something that I thought I would be able to say in my lifetime. Now, we said that this is a gift to humanity. Why is that? Where is the gift part here? What makes this a gift? Well, a little after publishing the paper, DeepMind made these 3D structure predictions available for free for everyone. For instance, they have made their human protein predictions public. Beyond that, they also made their predictions public for yeast, important pathogens, crop species, and more. I also said that AlphaFold 2 is not the end, but the start of something great. And I promised to tell you how it improves over time, so here are two incredible revelations. One, we noted that AlphaFold will likely help us fight diseases and develop new vaccines. And hold onto your papers, because this is not just talk; this is now a reality today. Look, this group was already able to accelerate their research work a great deal due to AlphaFold in fighting antibiotic-resistant bacteria. They are studying how to fight deadly bacteria that modify their own membrane structure so that the antibiotic can't get in. That is a hugely important project for all of us. They noted that they have been working vigorously on this project for 10 years. And now that they have access to AlphaFold, they could get a prediction done in 30 minutes. And this is something that was not possible to do for the 10 years before that. That is absolutely incredible. This is history in the making. I love to be able to live through times like this. So, thank you so much for sharing this amazing moment with me. So this was one project, and what is this here? Oh yes, these are some more of these AlphaFold predictions that were already referenced from other scientific research papers. This is making big waves. And two, this gift to humanity has now become bigger. So much bigger. Initially, the protein prediction database they published contained 1 million entries.
That is a lot of proteins. And now, if you have been holding onto your papers, squeeze that paper, because they have grown it from 1 million to 200 million structures. That is 200x within less than a year. Wow! So, just imagine how many more of these amazing medicine research projects it will be able to accelerate. I cannot even fathom how many lives this AI project is going to save in the future. What a time to be alive! Here is the distribution of the data across different categories; the small circles represent what was available before, and the big ones show what is available from now on. Absolutely amazing. It seems clearer and clearer that we are now entering the age of AI-driven art creation, but it might be possible that we will also soon enter the age of AI-driven medicine. These tools can help us so much. How cool is that? So, does this get your mind going? What would you use this for? Let me know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai/papers, or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
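Since the predictions are public, pulling one down takes only a few lines. A minimal sketch, assuming the AlphaFold Database's current file-naming scheme (model_v4) and using UniProt accession P69905, human hemoglobin subunit alpha, as an example; if the naming scheme changes, the URL below changes with it.

import urllib.request

accession = "P69905"  # a UniProt ID; see the database link in the description
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"
with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode()

# Count the atom records in the predicted 3D structure.
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(f"{accession}: {len(atom_lines)} predicted atom records")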
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jolaifahir."}, {"start": 4.76, "end": 8.4, "text": " Today you will see history in the making."}, {"start": 8.4, "end": 16.4, "text": " We are going to have a look at Alpha Fold from Deep Mind, perhaps one of the most important papers of the last few years."}, {"start": 16.4, "end": 19.2, "text": " This is a true gift to humanity."}, {"start": 19.2, "end": 29.900000000000002, "text": " And if even that is not good enough, it gets better because today you will see how this gift has now become 200 times bigger."}, {"start": 29.9, "end": 32.62, "text": " So, what is Alpha Fold?"}, {"start": 32.62, "end": 41.1, "text": " Alpha Fold is an AI that is capable of solving protein structure prediction which we were referred to as protein folding."}, {"start": 41.1, "end": 46.7, "text": " Okay, but what is a protein and why does it need folding?"}, {"start": 46.7, "end": 52.5, "text": " A protein is a string of amino acids, these are the building blocks of life."}, {"start": 52.5, "end": 58.2, "text": " This is what goes in, which in reality has a 3D structure."}, {"start": 58.2, "end": 60.6, "text": " And that is protein folding."}, {"start": 60.6, "end": 64.4, "text": " Letters go in, the 3D object comes out."}, {"start": 64.4, "end": 67.3, "text": " This is hard, really hard."}, {"start": 67.3, "end": 75.5, "text": " For instance, Deep Mind's earlier AI programs were able to play hard games like chess, go and Starcraft 2,"}, {"start": 75.5, "end": 80.10000000000001, "text": " and none of these AI's were a match to protein folding."}, {"start": 80.10000000000001, "end": 85.60000000000001, "text": " Yet, Alpha Fold has become the world's best solution at the Casp event."}, {"start": 85.6, "end": 91.8, "text": " I've heard Deep Mind CEO, Demis Ashabis, call it the Olympics of protein folding."}, {"start": 91.8, "end": 101.5, "text": " If you look at how teams of scientists prepare for this event, you will probably agree that yes, this is indeed the Olympics of protein folding."}, {"start": 101.5, "end": 110.3, "text": " And what is even more incredible is that it has reached a score of 90, which means two amazing things."}, {"start": 110.3, "end": 118.7, "text": " One, in absolute terms, Alpha Fold 2 is considered to be about 3 times better than previous solutions."}, {"start": 118.7, "end": 126.2, "text": " The score of 90 means that we can consider protein folding as a mostly solved problem."}, {"start": 126.2, "end": 132.7, "text": " Wow, this is not something that I thought I would be able to say in my lifetime."}, {"start": 132.7, "end": 136.5, "text": " Now, we said that this is a gift to humanity."}, {"start": 136.5, "end": 139.9, "text": " Why is that? 
Where is the gift part here?"}, {"start": 139.9, "end": 141.70000000000002, "text": " What makes this a gift?"}, {"start": 141.70000000000002, "end": 150.9, "text": " Well, a little after publishing the paper, Deep Mind made these 3D Structure Predictions available for free for everyone."}, {"start": 150.9, "end": 155.1, "text": " For instance, they have made their human protein predictions public."}, {"start": 155.1, "end": 163.1, "text": " Beyond that, they also made their predictions public for yeast, important pathogens, crop species, and more."}, {"start": 163.1, "end": 169.3, "text": " I also said that Alpha Fold 2 is not the end, but the start of something great."}, {"start": 169.3, "end": 176.5, "text": " And I promise to tell you how it improves over time, so here are two incredible revelations."}, {"start": 176.5, "end": 183.5, "text": " One, we noted that Alpha Fold will likely help us fight diseases and develop new vaccines."}, {"start": 183.5, "end": 190.5, "text": " And hold onto your papers, because this is not just talk, this is now a reality today."}, {"start": 190.5, "end": 197.70000000000002, "text": " Look, this group was already able to accelerate the research work a great deal due to Alpha Fold"}, {"start": 197.7, "end": 201.1, "text": " in fighting antibiotic resistant bacteria."}, {"start": 201.1, "end": 207.29999999999998, "text": " They are studying how to fight deadly bacteria that modify their own membrane structure"}, {"start": 207.29999999999998, "end": 210.1, "text": " so that the antibiotic can't get in."}, {"start": 210.1, "end": 213.7, "text": " That is a hugely important project for all of us."}, {"start": 213.7, "end": 218.89999999999998, "text": " They noted that they have been working vigorously on this project for 10 years."}, {"start": 218.89999999999998, "end": 225.29999999999998, "text": " And now that they have access to Alpha Fold, they could get a prediction done in 30 minutes."}, {"start": 225.3, "end": 231.10000000000002, "text": " And this is something that was not possible to do for 10 years before that."}, {"start": 231.10000000000002, "end": 233.9, "text": " That is absolutely incredible."}, {"start": 233.9, "end": 236.9, "text": " This is history in the making."}, {"start": 236.9, "end": 240.70000000000002, "text": " I love to be able to live through times like this."}, {"start": 240.70000000000002, "end": 245.3, "text": " So, thank you so much for sharing this amazing moment with me."}, {"start": 245.3, "end": 249.9, "text": " So this was one project, and what is this here?"}, {"start": 249.9, "end": 255.70000000000002, "text": " Oh yes, these are some more of these Alpha Fold predictions that were already referenced"}, {"start": 255.70000000000002, "end": 258.5, "text": " from other scientific research papers."}, {"start": 258.5, "end": 260.9, "text": " This is making big waves."}, {"start": 260.9, "end": 265.7, "text": " And two, this gift of humanity has now become bigger."}, {"start": 265.7, "end": 267.1, "text": " So much bigger."}, {"start": 267.1, "end": 272.7, "text": " Initially, the protein prediction database they published contained 1 million entries."}, {"start": 272.7, "end": 274.7, "text": " That is a lot of proteins."}, {"start": 274.7, "end": 284.7, "text": " And now, if you have been holding onto your papers, squeeze that paper because they have grown it from 1 million to 200 million structures."}, {"start": 284.7, "end": 288.7, "text": " That is 200x within less than a year."}, {"start": 288.7, "end": 289.7, "text": " 
Wow!"}, {"start": 289.7, "end": 297.09999999999997, "text": " So, just imagine how many more of these amazing medicine research projects it will be able to accelerate."}, {"start": 297.09999999999997, "end": 303.7, "text": " I cannot even fathom how many lives this AR project is going to save in the future."}, {"start": 303.7, "end": 305.7, "text": " What a time to be alive!"}, {"start": 305.7, "end": 314.7, "text": " Here is the distribution of the data across different categories, and the small circles represent what was available before,"}, {"start": 314.7, "end": 319.7, "text": " and the big ones show what is available from now on."}, {"start": 319.7, "end": 321.5, "text": " Absolutely amazing."}, {"start": 321.5, "end": 328.09999999999997, "text": " It seems clearer and clearer that we are now entering the age of AI-driven art creation,"}, {"start": 328.1, "end": 334.70000000000005, "text": " but it might be possible that we will also soon enter the age of AI-driven medicine."}, {"start": 334.70000000000005, "end": 337.70000000000005, "text": " These tools can help us so much."}, {"start": 337.70000000000005, "end": 339.3, "text": " How cool is that?"}, {"start": 339.3, "end": 341.5, "text": " So, does this get your mind going?"}, {"start": 341.5, "end": 343.5, "text": " What would you use this for?"}, {"start": 343.5, "end": 345.3, "text": " Let me know in the comments below."}, {"start": 345.3, "end": 348.70000000000005, "text": " This episode has been supported by CoHear AI."}, {"start": 348.70000000000005, "end": 353.90000000000003, "text": " CoHear builds large language models and makes them available through an API,"}, {"start": 353.9, "end": 362.5, "text": " so businesses can add advanced language understanding to their system or app quickly with just one line of code."}, {"start": 362.5, "end": 368.5, "text": " You can use your own data, whether it's text from customer service requests, legal contracts,"}, {"start": 368.5, "end": 376.29999999999995, "text": " or social media posts to create your own custom models to understand text, or even generated."}, {"start": 376.3, "end": 385.7, "text": " For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping,"}, {"start": 385.7, "end": 392.3, "text": " or it can be used to generate a list of possible sentences you can use for your product descriptions."}, {"start": 392.3, "end": 400.7, "text": " Make sure to go to CoHear.ai slash papers, or click the link in the video description and give it a try today."}, {"start": 400.7, "end": 402.3, "text": " It's super easy to use."}, {"start": 402.3, "end": 406.7, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_Y1-KlTEmwk
Google’s New AI: Fly INTO Photos! 🐦
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image" is available here: https://infinite-nature.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1761292/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are able to take a bunch of photos and use an AI to magically create a video where we can fly through these photos. It is really crazy because this is possible today. For instance, here is Nvidia's method that can be trained to perform this in a matter of seconds. Now, I said that with these, we can fly through these photos, but here is an insane idea. What if we used not multiple photos, but just one photo, and we don't fly through it, but fly into this photo? Now you are probably asking, Károly, what are you talking about? This is completely insane, and it wouldn't work with these NeRF-based solutions like the one you see here. These were not designed to do this at all. Look, oh yes, that. So in order to fly into these photos, we would have to invent at least three things. One is image inpainting. If we are to fly into this photo, we will have to be able to look at regions between the trees. Unfortunately, these are not part of the original photo, and hence, new content needs to be generated intelligently. That is a formidable task for an AI, and luckily, image inpainting techniques already exist out there. Here is one. But inpainting is not nearly enough. Two, as we fly into a photo, completely new regions should also appear that are beyond the image. This means that we also need to perform image outpainting, creating these new regions, continuing the image, if you will. Luckily, we are entering the age of AI-driven image generation, and this is also possible today, for instance, with this incredible tool. But even that is not enough. Why is that? Well, three, as we fly closer to these regions, we will be looking at fewer and fewer pixels from closer and closer, which means this. Oh my, another problem. Surely we can't solve this, right? Well, great news: we can. Here is Google's diffusion-based solution to super-resolution, where the principle is simple. Have a look at this technique from last year: in goes a coarse image or video, and this AI-based method is tasked with this. Yes, this is not science fiction, this is super-resolution, where the AI starts out from noise and synthesizes crisp details onto the image. So this might not be such an insane idea after all. But does the fact that we can do all three of these separately mean that this task is easy? Well, let's see how previous techniques were able to tackle this challenge. My guess is that this is still sinfully difficult to do. And oh boy, well, I see a lot of glitches and not a lot of new, meaningful content being synthesized here. And note that these are not some ancient techniques; these are all from just two years ago. It really seems that there is not a lot of hope here. But don't despair, and now, hold onto your papers, and let's see how Google's new AI puts all of these together and lets us fly into this photo. Wow, this is so much better. I love it. It is not perfect, but I feel that this is the first work where the flying-into-photos concept really comes to life. And it has a bunch of really cool features too. For instance, one, it can generate even longer videos, which means that after a few seconds, everything that we see is synthesized by the AI. Two, it supports not only this boring linear camera motion, but these really cool, curvy camera trajectories too. Putting these two features together, we can get these cool animations that were not possible before this paper.
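To make the three-ingredient recipe above concrete, here is a minimal sketch of the render-refine loop that "flying into a photo" implies: a plain center-crop zoom stands in for the forward camera motion, and the `refine` placeholder marks where a learned inpainting/outpainting/super-resolution model would synthesize the missing detail. This is an illustrative sketch under those assumptions, not the paper's actual pipeline (which warps the image using estimated depth); `refine` and the file name are hypothetical.

```python
# Minimal sketch of the "fly into a photo" render-refine loop described above.
from PIL import Image

def refine(frame: Image.Image) -> Image.Image:
    # Placeholder: a trained network would synthesize the missing detail here
    # (inpaint disoccluded regions, outpaint borders, super-resolve the blur).
    return frame

def fly_into(photo_path: str, steps: int = 30, zoom_per_step: float = 0.98):
    frame = Image.open(photo_path).convert("RGB")
    w, h = frame.size
    frames = [frame]
    for _ in range(steps):
        # "Move the camera forward": keep only the central fraction...
        cw, ch = int(w * zoom_per_step), int(h * zoom_per_step)
        left, top = (w - cw) // 2, (h - ch) // 2
        frame = frame.crop((left, top, left + cw, top + ch))
        # ...then scale back up. Fewer and fewer original pixels remain,
        # which is exactly why synthesis and super-resolution are needed.
        frame = frame.resize((w, h), Image.BICUBIC)
        frame = refine(frame)
        frames.append(frame)
    return frames

# Example: frames = fly_into("landscape.jpg"); frames[0].save("flight.gif",
#     save_all=True, append_images=frames[1:], duration=66, loop=0)
```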
Now, the flaws are clearly visible for everyone, but this is a historic episode where we can invoke the three laws of papers to address them. The first law of papers says that research is a process. So do not look at where we are; look at where we will be two more papers down the line. With this concept, we are roughly where DALL-E 1 was about a year ago. That is an image generator AI that could produce images of this quality. And just one year later, DALL-E 2 arrived, which could do this. So just imagine what kind of videos this will be able to create just one more paper down the line. The second law of papers says that everything is connected. This AI technique is able to learn image inpainting, image outpainting, and super-resolution at the same time, and even combine them creatively. We don't need three separate AIs to do this anymore, just one technique. That is very impressive. And the third law of papers says that a bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see here is always just 1% of the work that was done. Why is that? Well, for instance, this is a neural-network-based solution, which means that we need a ton of training data for these AIs to learn on. And hence, scientists at Google also needed to create a technique to gather a ton of drone videos from the internet and create a clean data set, with labelling as well. The labels are essentially depth information, which shows how far different parts of the image are from the camera. And they did it for more than 10 million images in total. So once again, if you include all the versions of this idea that didn't work, what you see here is just 1% of the work that was done. And now we can not only fly through these photos, but also fly into photos. What a time to be alive. So what do you think? Does this get your mind going? What would you use this for? Let me know in the comments below. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro or just click the link in the video description and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
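The transcript above notes that the training labels are per-frame depth maps. As a rough illustration of how one such label could be produced for a single frame, here is the off-the-shelf MiDaS monocular depth estimator via torch.hub; this is a stand-in for the idea only, not the authors' actual labelling pipeline, and "frame.jpg" is a hypothetical input file.

```python
# Hedged illustration: a depth-map "label" for one frame with MiDaS.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))              # low-resolution inverse depth
    depth = torch.nn.functional.interpolate(  # resize back to the frame size
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

# Save a normalized grayscale visualization of the depth label.
cv2.imwrite("frame_depth.png",
            cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8"))
```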
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Carlos John Fahir."}, {"start": 4.64, "end": 12.88, "text": " Today, we are able to take a bunch of photos and use an AI to magically create a video where"}, {"start": 12.88, "end": 15.72, "text": " we can fly through these photos."}, {"start": 15.72, "end": 19.48, "text": " It is really crazy because this is possible today."}, {"start": 19.48, "end": 25.2, "text": " For instance, here is Nvidia's method that can be trained to perform this in a matter"}, {"start": 25.2, "end": 26.2, "text": " of seconds."}, {"start": 26.2, "end": 33.96, "text": " Now, I said that in these we can fly through these photos, but here is an insane idea."}, {"start": 33.96, "end": 41.12, "text": " What if we used not multiple photos, but just one photo, and we don't fly through it,"}, {"start": 41.12, "end": 44.4, "text": " but fly into this photo."}, {"start": 44.4, "end": 49.2, "text": " Now you are probably asking, Karoi, what are you talking about?"}, {"start": 49.2, "end": 54.760000000000005, "text": " This is completely insane and it wouldn't work with these nerve-based solutions like the"}, {"start": 54.76, "end": 56.68, "text": " one you see here."}, {"start": 56.68, "end": 59.48, "text": " These were not designed to do this at all."}, {"start": 59.48, "end": 63.519999999999996, "text": " Look, oh yes, that."}, {"start": 63.519999999999996, "end": 70.64, "text": " So in order to fly into these photos, we would have to invent at least three things."}, {"start": 70.64, "end": 72.92, "text": " One is image in painting."}, {"start": 72.92, "end": 78.92, "text": " If we are to fly into this photo, we will have to be able to look at regions between"}, {"start": 78.92, "end": 79.92, "text": " the trees."}, {"start": 79.92, "end": 87.6, "text": " Unfortunately, these are not part of the original photo and hence, new content needs to be generated"}, {"start": 87.6, "end": 88.68, "text": " intelligently."}, {"start": 88.68, "end": 95.92, "text": " That is a formidable task for an AI and luckily image in painting techniques already exist"}, {"start": 95.92, "end": 96.92, "text": " out there."}, {"start": 96.92, "end": 98.2, "text": " Here is one."}, {"start": 98.2, "end": 101.52000000000001, "text": " But in painting is not nearly enough."}, {"start": 101.52000000000001, "end": 108.96000000000001, "text": " Two, as we fly into a photo, completely new regions should also appear that are beyond"}, {"start": 108.96, "end": 110.08, "text": " the image."}, {"start": 110.08, "end": 116.36, "text": " This means that we also need to perform image out painting, creating these new regions,"}, {"start": 116.36, "end": 118.47999999999999, "text": " continuing the image, if you will."}, {"start": 118.47999999999999, "end": 124.83999999999999, "text": " Luckily, we are entering the age of AI-driven image generation and this is also possible"}, {"start": 124.83999999999999, "end": 129.0, "text": " today, for instance, with this incredible tool."}, {"start": 129.0, "end": 131.88, "text": " But even that is not enough."}, {"start": 131.88, "end": 133.2, "text": " Why is that?"}, {"start": 133.2, "end": 140.0, "text": " Well, three, as we fly closer to these regions, we will be looking at fewer and fewer pixels"}, {"start": 140.0, "end": 143.76, "text": " and from closer and closer, which means this."}, {"start": 143.76, "end": 146.35999999999999, "text": " Oh my, another problem."}, {"start": 146.35999999999999, "end": 
148.79999999999998, "text": " Surely, we can't solve this, right?"}, {"start": 148.79999999999998, "end": 151.48, "text": " Well, great news we can."}, {"start": 151.48, "end": 158.0, "text": " Here is Google's diffusion-based solution to super-resolution, where the principle is simple."}, {"start": 158.0, "end": 163.96, "text": " Have a look at this technique from last year, in goes a course image or video, and this"}, {"start": 163.96, "end": 167.24, "text": " AI-based method is tasked with this."}, {"start": 167.24, "end": 174.68, "text": " Yes, this is not science fiction, this is super-resolution, where the AI starts out from noise"}, {"start": 174.68, "end": 178.84, "text": " and synthesizes crisp details onto the image."}, {"start": 178.84, "end": 183.28, "text": " So this might not be such an insane idea after all."}, {"start": 183.28, "end": 189.72, "text": " But, does the fact that we can do all three of these separately mean that this task is"}, {"start": 189.72, "end": 190.72, "text": " easy?"}, {"start": 190.72, "end": 195.92000000000002, "text": " Well, let's see how previous techniques were able to tackle this challenge."}, {"start": 195.92000000000002, "end": 200.52, "text": " My guess is that this is still sinfully difficult to do."}, {"start": 200.52, "end": 207.12, "text": " And oh boy, well, I see a lot of glitches and not a lot of new, meaningful content being"}, {"start": 207.12, "end": 208.72, "text": " synthesized here."}, {"start": 208.72, "end": 215.64, "text": " And note that these are not some ancient techniques, these are all from just two years ago."}, {"start": 215.64, "end": 218.72, "text": " It really seems that there is not a lot of hope here."}, {"start": 218.72, "end": 224.68, "text": " But don't despair, and now, hold onto your papers, and let's see how Google's new AI"}, {"start": 224.68, "end": 229.52, "text": " puts all of these together and lets us fly into this photo."}, {"start": 229.52, "end": 233.0, "text": " Wow, this is so much better."}, {"start": 233.0, "end": 234.76, "text": " I love it."}, {"start": 234.76, "end": 240.22, "text": " Not really, not perfect, but I feel that this is the first work where the flying into"}, {"start": 240.22, "end": 244.07999999999998, "text": " photos concept really comes into life."}, {"start": 244.07999999999998, "end": 247.6, "text": " And it has a bunch of really cool features too."}, {"start": 247.6, "end": 254.56, "text": " For instance, one, it can generate even longer videos, which means that after a few seconds"}, {"start": 254.56, "end": 258.48, "text": " everything that we see is synthesized by the AI."}, {"start": 258.48, "end": 265.48, "text": " Two, it supports not only this boring linear camera motion, but these really cool, curvy"}, {"start": 265.48, "end": 267.56, "text": " camera trajectories too."}, {"start": 267.56, "end": 273.0, "text": " Putting these two features together, we can get these cool animations that were not possible"}, {"start": 273.0, "end": 274.6, "text": " before this paper."}, {"start": 274.6, "end": 280.40000000000003, "text": " Now the flaws are clearly visible for everyone, but this is a historic episode where we can"}, {"start": 280.40000000000003, "end": 283.84000000000003, "text": " invoke the three laws of papers to address them."}, {"start": 283.84000000000003, "end": 287.8, "text": " The first law of papers says that research is a process."}, {"start": 287.8, "end": 293.28000000000003, "text": " So not look at where we are, look at where we will be two more papers 
down the line."}, {"start": 293.28000000000003, "end": 299.36, "text": " With this concept, we are roughly where Dolly 1 was about a year ago."}, {"start": 299.36, "end": 304.88, "text": " That is an image generator AI that could produce images of this quality."}, {"start": 304.88, "end": 310.36, "text": " And just one year later Dolly 2 arrived, which could do this."}, {"start": 310.36, "end": 316.64, "text": " So just imagine what kind of videos this will be able to create just one more paper down"}, {"start": 316.64, "end": 317.64, "text": " the line."}, {"start": 317.64, "end": 322.0, "text": " The second law of papers says that everything is connected."}, {"start": 322.0, "end": 328.64, "text": " This AI technique is able to learn image in painting, image outpainting and super resolution"}, {"start": 328.64, "end": 333.32, "text": " at the same time and even combine them creatively."}, {"start": 333.32, "end": 338.71999999999997, "text": " We don't need three separate AI's to do this anymore, just one technique."}, {"start": 338.71999999999997, "end": 340.84, "text": " That is very impressive."}, {"start": 340.84, "end": 348.15999999999997, "text": " And the third law of papers says that a bad researcher fails 100% of the time while a good"}, {"start": 348.15999999999997, "end": 352.11999999999995, "text": " one only fails 99% of the time."}, {"start": 352.11999999999995, "end": 357.23999999999995, "text": " Hence what you see here is always just 1% of the work that was done."}, {"start": 357.23999999999995, "end": 358.4, "text": " Why is that?"}, {"start": 358.4, "end": 364.2, "text": " Well for instance this is a neural network based solution which means that we need a ton"}, {"start": 364.2, "end": 367.55999999999995, "text": " of training data for these AI's to learn on."}, {"start": 367.56, "end": 373.88, "text": " And hence scientists at Google also needed to create a technique together a ton of drone"}, {"start": 373.88, "end": 379.96, "text": " videos on the internet and create a clean data set also with labelling as well."}, {"start": 379.96, "end": 385.48, "text": " The labels are essentially depth information which shows how far different parts of the"}, {"start": 385.48, "end": 387.88, "text": " image are from the camera."}, {"start": 387.88, "end": 392.8, "text": " And they did it for more than 10 million images in total."}, {"start": 392.8, "end": 398.48, "text": " So once again if you include all the versions of this idea that didn't work what you see"}, {"start": 398.48, "end": 402.24, "text": " here is just 1% of the work that was done."}, {"start": 402.24, "end": 408.76, "text": " And now we can not only fly through these photos but also fly into photos."}, {"start": 408.76, "end": 410.52, "text": " What a time to be alive."}, {"start": 410.52, "end": 412.24, "text": " So what do you think?"}, {"start": 412.24, "end": 413.92, "text": " Does this get your mind going?"}, {"start": 413.92, "end": 415.76, "text": " What would you use this for?"}, {"start": 415.76, "end": 417.72, "text": " Let me know in the comments below."}, {"start": 417.72, "end": 421.2, "text": " This video has been supported by weights and biases."}, {"start": 421.2, "end": 427.15999999999997, "text": " Being a machine learning researcher means doing tons of experiments and of course creating"}, {"start": 427.15999999999997, "end": 428.96, "text": " tons of data."}, {"start": 428.96, "end": 433.52, "text": " But I am not looking for data, I am looking for insights."}, {"start": 433.52, "end": 
436.68, "text": " And weights and biases helps with exactly that."}, {"start": 436.68, "end": 442.84, "text": " They have tools for experiment tracking, data set and model versioning and even hyperparameter"}, {"start": 442.84, "end": 444.56, "text": " optimization."}, {"start": 444.56, "end": 451.08, "text": " No wonder this is the experiment tracking tool choice of open AI, Toyota Research, Samsung"}, {"start": 451.08, "end": 453.76, "text": " and many more prestigious labs."}, {"start": 453.76, "end": 461.96, "text": " Make sure to use the link wnb.me slash paper intro or just click the link in the video description"}, {"start": 461.96, "end": 467.84, "text": " and try this 10 minute example of weights and biases today to experience the wonderful"}, {"start": 467.84, "end": 474.08, "text": " feeling of training a neural network and being in control of your experiments."}, {"start": 474.08, "end": 476.36, "text": " After you try it you won't want to go back."}, {"start": 476.36, "end": 480.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nVhmFski3vg
Stable Diffusion: DALL-E 2 For Free, For Everyone!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://ommer-lab.com/research/latent-diffusion-models/ https://github.com/mallorbc/stable-diffusion-klms-gui ❗Try it here (we seem to have crashed it...again 😅, but sometimes it works, please be patient!): https://huggingface.co/spaces/stabilityai/stable-diffusion ❗Or here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb Great notebooks to try: https://www.reddit.com/r/StableDiffusion/comments/wzk78c/colab_notebook_sd_hiki_by_daswerq123_has_a/ https://github.com/pinilpypinilpy/sd-webui-colab-simplified https://github.com/victordibia/peacasso Run it on your own graphics card: https://github.com/CompVis/stable-diffusion Guide on how to run it at home: https://www.assemblyai.com/blog/how-to-run-stable-diffusion-locally-to-generate-images/ Image to image translation: https://twitter.com/AnjneyMidha/status/1564290733917360128 Turn your drawings into images: https://huggingface.co/spaces/huggingface/diffuse-the-rest Run it on an M1 Mac - https://replicate.com/blog/run-stable-diffusion-on-m1-mac Even more resources: https://multimodal.art/news/1-week-of-stable-diffusion Dreaming: https://twitter.com/_nateraw/status/1560320480816545801 ❗Interpolation: https://twitter.com/xsteenbrugge/status/1558508866463219712 ❗Full video of interpolation: https://www.youtube.com/watch?v=Bo3VZCjDhGI Interpolation (other): https://replicate.com/andreasjansson/stable-diffusion-animation Portrait interpolation: https://twitter.com/motionphi/status/1565377550401998848 Felícia Zsolnai-Fehér's works (the chimp drawing): http://felicia.hu Fantasy examples: https://twitter.com/DiffusionPics/status/1562172051171016706/photo/1 https://twitter.com/DiffusionPics/status/1562012909470748674/photo/1 https://twitter.com/DiffusionPics/status/1562625020790116352/photo/1 https://twitter.com/DiffusionPics/status/1562172051171016706/photo/1 https://twitter.com/Jason76066945/status/1560568161203736577/photo/1 https://twitter.com/DiffusionPics/status/1561839825786904577/photo/1 Collage: https://twitter.com/genekogan/status/1555184488606564353 Fantasy again: https://twitter.com/raphaelmilliere/status/1562480868186521601 Animation: https://twitter.com/CoffeeVectors/status/1558655441059594240 Variant generation: https://twitter.com/Buntworthy/status/1561703483316781057 Random noise walks: https://twitter.com/karpathy/status/1559343616270557184 Portrait interpolation: https://twitter.com/xsteenbrugge/status/1557018356099710979 Fantasy concept art montage: https://www.reddit.com/r/StableDiffusion/comments/wz2zx5/i_tried_to_make_some_fantasy_concept_art_with/ Scholars holding on to their papers: https://twitter.com/Merzmensch/status/1567219588538011650 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil 
Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail source images: Anjney Midha Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Teaser 0:22 The Age of AI Image Generation 1:06 But there is a problem 1:22 Stable Diffusion to the rescue! 2:11 1 - Dreaming 2:54 2 - Interpolation 3:30 3 - Fantasy 4:00 4 - Collage 4:45 5 - More fantasy 4:51 6 - Random noise walks 5:33 7 - Animations 6:00 8 - Portraits + interpolation 6:22 9 - Variant generation 6:39 10 - Montage 7:05 Good news! 7:29 Try it here! 8:04 The Age of Free and Open AI Image Generation! 8:19 The First Law of Papers 9:00 Stable Diffusion 👌 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to look at images and even videos that were created by an AI, and honestly, I cannot believe how good some of these are. And is it possible that you can run this AI at home yourself? You'll find out today. This year, through OpenAI's DALL-E 2, we are entering the age of AI-driven image generation. Most of these techniques take a text prompt, which means that we can write whatever we wish to see on the screen. A noise pattern appears that slowly morphs into exactly what we are looking for. This is what we mean when we say that we are talking about diffusion-based models. Now, OpenAI's DALL-E 2 can create incredibly creative images, and Google's Parti and Imagen AIs are also at the very least as good. Sometimes they even win linguistic battles against OpenAI's solution. But there is a problem. All of them are missing something. And that is the model weights and the source code. This means that these are all closed solutions, and we cannot pop the hood and look around inside. But now, here is a new solution called Stable Diffusion, where the model weights and the full source code are available. I cannot overstate how amazing this is. So, to demonstrate why this is so amazing, here are two reasons. Reason number one is that with this, we can finally take out our digital wrench and tinker with it. For instance, we can now adjust the internal parameters in a way that we cannot do with closed solutions like DALL-E 2 and Imagen. So now, let's have a look together at 10 absolutely amazing examples of what it can do. And after that, I'll tell you about reason number two of how it gets even better. One, dreaming. Since the internal parameters are now exposed, we can add small changes to them and create a bunch of outputs that are similar. And then, finally, stitch these images together as a video. This is so much better for exploring ideas. Just imagine that sometimes you get an image that is almost what you are looking for, but the framing or the shape of the doggie is not exactly perfect. Well, you won't need to throw out these almost perfect solutions anymore. Look at that! We can make the perfect good boy so much easier. I absolutely love it. Wow! Two, interpolation. Now, hold on to your papers, because we can even create a beautiful visual novel like this one by entering a bunch of prompts like the ones you see here. And we don't go from just one image to the next one in one jarring jump. Instead, these images can now be morphed into the next one, creating these amazing transitions. By the way, the links to all these materials are available in the video description, including a link to the full version of the video that you see here. Three, its fantasy imagery is truly something else. Whether you are looking for landscapes, I was quite surprised by how competent Stable Diffusion is at creating those. These tree houses are amazing too. But that's not when I fell off the chair. I fell off the chair when I saw these realistic fairy princesses. I did not expect it to be able to create such amazingly realistic humans. How cool is that? Four, we can also create a collage. Here, we can take a canvas, enter several prompts, and select a range for each of them. Now, the issue is that there is space between the images, and there is another problem: even if there were no space between them, they won't blend into each other.
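Since both the weights and the code are public, running the model really is a few lines. Here is a minimal sketch using the Hugging Face diffusers library and the v1.4 checkpoint that was current at the time (assuming a CUDA GPU; the prompt and output file name are just examples, not from the video):

```python
# Minimal text-to-image sketch with the released Stable Diffusion weights,
# via Hugging Face diffusers (model ID and API as of the 2022 v1.4 release).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16,
).to("cuda")  # half precision fits on a consumer graphics card

# Fixing the seed pins down the starting noise pattern -- one of the
# "internal parameters" the transcript talks about tinkering with.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(
    "a fairy princess, photorealistic portrait, detailed",  # example prompt
    guidance_scale=7.5, num_inference_steps=50, generator=generator,
).images[0]
image.save("princess.png")  # hypothetical output file
```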
No matter, Stable Diffusion can also perform image inpainting, which means that we select a region, delete it, and it will be filled in with information based on its surroundings. And the results are spectacular. We don't get many separate images, we get one coherent image instead. Five, you know what? Let's look at a few more fantasy examples. Here are some of my favorites. Six, now, these are diffusion-based models, which means that they start out from a bunch of noise and slowly adjust the pixels of this image to resemble the input text prompt a little more. Hence, they are very sensitive to the initial noise patterns that we start out from. Andrej Karpathy found an amazing way to take advantage of this property by adjusting the noise by just a tiny bit and creating many new, similar images. Stitched together, they result in a hypnotic video like this one. Random noise walks, if you will. Loving it. Seven, it can generate not only images, but with a little additional work, even animations. Look, you are going to love this one. This was made by creating the same image with the eyes open and closed, and with the additional work of blending them together, it looks like this. Once again, the links to all of these works are available in the video description if you wish to have a closer look at the process. Eight, you remember that it can create fantastic portraits and it can interpolate between them. Now, putting it all together, it can create portraits and interpolate between them, creating these sometimes smooth, sometimes a little jumpy videos. And don't forget, nine, variant generation is still possible. We can still give it an input image, and since it understands what this image depicts, it can also repaint it in different variations. And finally, ten. The fact that these amazing images come out of Stable Diffusion does not mean that we have to use them in their entirety. If there is just one part of an image that we like, be it the knight on a horse or the castle, that is more than enough. We can discard the rest of the image and just use the parts that we love best and make an awesome montage out of it. Now, we discussed that we can pop the hood and tinker with this AI — that was one of the amazing reasons behind these results — but I promised two reasons why this is so good. So what is reason number two? Is it possible that... yes, yes, yes, this is the moment you have been waiting for. You can now try it yourself. If you are patient, you can engage in changing the internal parameters here and get some amazing variants. You might have to wait for a bit, but as of the making of this video, it works. Now, what happens when you Fellow Scholars get over there, who really knows — we have crashed plenty of websites before with our scholarly stampede. And if you don't want to wait, or wish to run some more advanced experiments, you can run the model yourself at home on a consumer graphics card. And if you are unable to try it, don't despair. AI-based image generation is only getting cheaper and more democratized from here on out. So, a little open source competition for OpenAI and Google. What a time to be alive. And please, as always, whatever you do, do not forget to apply the first law of papers, which says that research is a process. Do not look at where we are; look at where we will be two more papers down the line. Here are some results from DALL-E 1, and just a year later, DALL-E 2 was capable of this. Just a year later. That is unbelievable. Just imagine what we will be able to do five years from now.
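Because the starting noise is exposed as an ordinary tensor, the dreaming and noise-walk tricks from the transcript can be sketched directly: spherically interpolate between two seeds and render each in-between latent with the same prompt. A minimal sketch under those assumptions; the slerp helper is the standard community trick for this, not an official API, and the prompt is an example:

```python
# Sketch of the "random noise walk" idea: one prompt rendered from noise
# tensors slerped between two seeds, giving smoothly morphing frames.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Spherical interpolation keeps the noise's Gaussian-like norm;
    # plain linear blending would shrink it and wash out the images.
    af, bf = a.flatten().float(), b.flatten().float()
    omega = torch.acos((af / af.norm()).dot(bf / bf.norm()).clamp(-1, 1))
    out = (torch.sin((1 - t) * omega) * af + torch.sin(t * omega) * bf) / torch.sin(omega)
    return out.reshape(a.shape).to(a.dtype)

# Latents live on a 64x64 grid of 4 channels for 512x512 images (512 // 8).
shape = (1, pipe.unet.config.in_channels, 64, 64)
g = torch.Generator("cuda")
n0 = torch.randn(shape, generator=g.manual_seed(0), device="cuda", dtype=torch.float16)
n1 = torch.randn(shape, generator=g.manual_seed(1), device="cuda", dtype=torch.float16)

prompt = "a castle on a hill at sunset, fantasy concept art"  # example prompt
for i in range(8):  # 8 frames; stitch them into a video afterwards
    latents = slerp(i / 7, n0, n1)
    pipe(prompt, latents=latents).images[0].save(f"walk_{i:02d}.png")
```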
If you have some ideas, make sure to leave a comment about that below. So finally, Stable Diffusion: a free and open source solution for AI-based image generation. Double thumbs up. This is something for everyone out there, and it really shows the power of collaboration, as tinkerers around the world work together to make something amazing. I love it. Thank you so much. And note that all this took about $600,000 to train. Now, make no mistake, that is a lot of dollars. But this also means that creating an AI like this does not cost tens of millions of dollars anymore. And the team at Stability AI is already working on a smaller and cheaper model than this. So we are now entering not only the age of AI-based image generation, but the age of free and open AI-based image generation. Oh yes, and for now, let the experiments begin. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 13.200000000000001, "text": " Today, we are going to look at images and even videos that were created by an AI and honestly."}, {"start": 13.200000000000001, "end": 16.8, "text": " I cannot believe how good some of these are."}, {"start": 16.8, "end": 21.6, "text": " And is it possible that you can run this AI at home yourself?"}, {"start": 21.6, "end": 23.400000000000002, "text": " You'll find out today."}, {"start": 23.4, "end": 30.799999999999997, "text": " This year, through OpenAI's Dolly2, we are entering the age of AI-driven image generation."}, {"start": 30.799999999999997, "end": 38.0, "text": " Most of these techniques take a text prompt, which means that we can write whatever we wish to see on the screen."}, {"start": 38.0, "end": 44.599999999999994, "text": " A noise pattern appears that slowly morphs into exactly what we are looking for."}, {"start": 44.599999999999994, "end": 50.2, "text": " This is what we mean when we say that we are talking about diffusion-based models."}, {"start": 50.2, "end": 61.6, "text": " Now, OpenAI's Dolly2 can create incredibly creative images and Google's Party and Imaging AI's are also at the very least as good."}, {"start": 61.6, "end": 67.0, "text": " Sometimes they even win linguistic battles against OpenAI's solution."}, {"start": 67.0, "end": 69.4, "text": " But there is a problem."}, {"start": 69.4, "end": 72.0, "text": " All of them are missing something."}, {"start": 72.0, "end": 75.60000000000001, "text": " And that is the model weights and the source code."}, {"start": 75.6, "end": 83.0, "text": " This means that these are all closed solutions and we cannot pop the hood and look around inside."}, {"start": 83.0, "end": 92.0, "text": " But now, here is a new solution called stable diffusion, where the model weights and the full source code are available."}, {"start": 92.0, "end": 95.6, "text": " I cannot overstate how amazing this is."}, {"start": 95.6, "end": 100.8, "text": " So, to demonstrate why this is so amazing, here are two reasons."}, {"start": 100.8, "end": 108.2, "text": " Reason number one is that with this, we can finally take out our digital range and tinker with it."}, {"start": 108.2, "end": 118.0, "text": " For instance, we can now adjust the internal parameters in a way that we cannot do with the closed solutions like Dolly2 and Imaging."}, {"start": 118.0, "end": 124.4, "text": " So now, let's have a look together at 10 absolutely amazing examples of what it can do."}, {"start": 124.4, "end": 130.4, "text": " And after that, I'll tell you about reason number two of how it gets even better."}, {"start": 130.4, "end": 132.20000000000002, "text": " One, dreaming."}, {"start": 132.20000000000002, "end": 141.8, "text": " Since the internal parameters are now exposed, we can add small changes to them and create a bunch of outputs that are similar."}, {"start": 141.8, "end": 146.6, "text": " And then finally, stitch these images together as a video."}, {"start": 146.6, "end": 150.20000000000002, "text": " This is so much better for exploring ideas."}, {"start": 150.20000000000002, "end": 155.8, "text": " Just imagine that sometimes you get an image that is almost what you are looking for."}, {"start": 155.8, "end": 161.60000000000002, "text": " But the framing or the shape of the doggie is not exactly perfect."}, {"start": 161.60000000000002, "end": 166.60000000000002, "text": " Well, 
you won't need to throw out these almost perfect solutions anymore."}, {"start": 166.60000000000002, "end": 168.20000000000002, "text": " Look at that!"}, {"start": 168.20000000000002, "end": 172.0, "text": " We can make the perfect good boy so much easier."}, {"start": 172.0, "end": 174.4, "text": " I absolutely love it."}, {"start": 174.4, "end": 175.4, "text": " Wow!"}, {"start": 175.4, "end": 177.4, "text": " Two interpolation."}, {"start": 177.4, "end": 184.60000000000002, "text": " Now, hold on to your papers because we can even create a beautiful visual novel like this one"}, {"start": 184.6, "end": 188.6, "text": " by entering a bunch of prompts like the ones you see here."}, {"start": 188.6, "end": 194.0, "text": " And we don't go from just one image to the next one in one jarring jump."}, {"start": 194.0, "end": 202.0, "text": " But instead, these images can now be morphed into the next one creating these amazing transitions."}, {"start": 202.0, "end": 207.2, "text": " By the way, the links to all these materials are available in the video description,"}, {"start": 207.2, "end": 211.4, "text": " including a link to the full version of the video that you see here."}, {"start": 211.4, "end": 216.0, "text": " Three, its fantasy imagery are truly something else."}, {"start": 216.0, "end": 221.32, "text": " Whether you are looking for landscapes, I was quite surprised by how competent stable"}, {"start": 221.32, "end": 224.16, "text": " diffusion is at creating those."}, {"start": 224.16, "end": 226.92000000000002, "text": " These three houses are amazing too."}, {"start": 226.92000000000002, "end": 229.20000000000002, "text": " But that's not when I fell off the chair."}, {"start": 229.20000000000002, "end": 234.20000000000002, "text": " I fell off the chair when I saw these realistic fairy princesses."}, {"start": 234.20000000000002, "end": 239.64000000000001, "text": " I did not expect it to be able to create such amazingly realistic humans."}, {"start": 239.64000000000001, "end": 241.16, "text": " How cool is that?"}, {"start": 241.16, "end": 244.56, "text": " Four, we can also create a collage."}, {"start": 244.56, "end": 251.56, "text": " Here, we can take a canvas, enter several prompts, and select a range for each of them."}, {"start": 251.56, "end": 258.15999999999997, "text": " Now, the issue is that there is space between the images and there is another problem,"}, {"start": 258.15999999999997, "end": 263.56, "text": " even if there is no space between them, they won't blend into each other."}, {"start": 263.56, "end": 269.86, "text": " No matter, stable diffusion can also perform image in painting, which means that we select"}, {"start": 269.86, "end": 277.24, "text": " a region, delete it, and it will be filled in with information based on its surroundings."}, {"start": 277.24, "end": 279.84000000000003, "text": " And the results are spectacular."}, {"start": 279.84000000000003, "end": 285.12, "text": " We don't get many separate images, we get one coherent image instead."}, {"start": 285.12, "end": 287.32, "text": " Five, you know what?"}, {"start": 287.32, "end": 290.76, "text": " Let's look at a few more fantasy examples."}, {"start": 290.76, "end": 292.96000000000004, "text": " Here are some of my favorites."}, {"start": 292.96000000000004, "end": 298.88, "text": " Six, now these are diffusion based models, which means that they start out from a bunch"}, {"start": 298.88, "end": 305.4, "text": " of noise and slowly adjust the pixels of this image to resemble the input 
text prompts"}, {"start": 305.4, "end": 306.96, "text": " a little more."}, {"start": 306.96, "end": 312.28, "text": " Hence, they are very sensitive to the initial noise patterns that we start out from."}, {"start": 312.28, "end": 318.56, "text": " Andr\u00e9 Carpethy found an amazing way to take advantage of this property by adjusting"}, {"start": 318.56, "end": 325.2, "text": " the noise, but just a tiny bit and create many new, similar images."}, {"start": 325.2, "end": 329.96, "text": " As this together, it results in a hypnotic video like this one."}, {"start": 329.96, "end": 332.96, "text": " Random noise walks if you will."}, {"start": 332.96, "end": 333.96, "text": " Loving it."}, {"start": 333.96, "end": 340.52, "text": " It can generate not only images, but with a little additional work, even animations."}, {"start": 340.52, "end": 343.52, "text": " Look, you are going to love this one."}, {"start": 343.52, "end": 350.15999999999997, "text": " This was made by creating the same image with the eyes open and closed, and with the additional"}, {"start": 350.15999999999997, "end": 354.48, "text": " work of blending them together, it looks like this."}, {"start": 354.48, "end": 359.68, "text": " Once again, the links to all of these works are available in the video description if you"}, {"start": 359.68, "end": 362.48, "text": " wish to have a closer look at the process."}, {"start": 362.48, "end": 369.88, "text": " Eight, you remember that it can create fantastic portraits and it can interpolate between them."}, {"start": 369.88, "end": 376.40000000000003, "text": " Now putting it all together, it can create portraits and interpolate between them, creating"}, {"start": 376.40000000000003, "end": 381.40000000000003, "text": " this sometimes smooth, sometimes a little jumpy videos."}, {"start": 381.4, "end": 386.32, "text": " And don't forget, mine, variant generation is still possible."}, {"start": 386.32, "end": 392.67999999999995, "text": " We can still give it an input image and since it understands what this image depicts, it"}, {"start": 392.67999999999995, "end": 396.67999999999995, "text": " can also repaint it in different variations."}, {"start": 396.67999999999995, "end": 398.47999999999996, "text": " And finally, ten."}, {"start": 398.47999999999996, "end": 404.32, "text": " The fact that these amazing images come out of stable diffusion does not mean that we"}, {"start": 404.32, "end": 406.79999999999995, "text": " have to use them in their entirety."}, {"start": 406.8, "end": 413.0, "text": " If there is just one part of an image that we like, be it the night on a horse or the"}, {"start": 413.0, "end": 415.88, "text": " castle, that is more than enough."}, {"start": 415.88, "end": 423.08000000000004, "text": " We can discard the rest of the image and just use the parts that we love best and make an"}, {"start": 423.08000000000004, "end": 425.52, "text": " awesome montage out of it."}, {"start": 425.52, "end": 431.52, "text": " Now we discussed that we can pop the hood and tinker with this AI that was one of the"}, {"start": 431.52, "end": 438.44, "text": " amazing reasons behind these results, but I promised two reasons why this is so good."}, {"start": 438.44, "end": 441.32, "text": " So what is reason number two?"}, {"start": 441.32, "end": 447.0, "text": " Is it possible that yes, yes, yes, this is the moment you have been waiting for."}, {"start": 447.0, "end": 449.28, "text": " You can now try it yourself."}, {"start": 449.28, "end": 455.64, "text": " If you are 
patient, you can engage in changing the internal parameters here and get some"}, {"start": 455.64, "end": 457.2, "text": " amazing variants."}, {"start": 457.2, "end": 462.76, "text": " You might have to wait for a bit, but as of the making of this video, it works."}, {"start": 462.76, "end": 468.8, "text": " Now what happens when you follow scholars get over there who really knows we have crashed"}, {"start": 468.8, "end": 472.71999999999997, "text": " plenty of websites before with our scholarly stampede."}, {"start": 472.71999999999997, "end": 477.59999999999997, "text": " And if you don't want to wait or wish to run some more advanced experiments, you can"}, {"start": 477.59999999999997, "end": 482.76, "text": " run the model yourself at home on a consumer graphics card."}, {"start": 482.76, "end": 487.36, "text": " And if you are unable to try it, don't despair."}, {"start": 487.36, "end": 492.96, "text": " AI-based image generation is only getting cheaper and more democratized from here on out."}, {"start": 492.96, "end": 498.56, "text": " So a little open source competition for open AI and Google."}, {"start": 498.56, "end": 500.64, "text": " What a time to be alive."}, {"start": 500.64, "end": 506.84, "text": " And please, as always, whatever you do, do not forget to apply the first law of papers"}, {"start": 506.84, "end": 509.68, "text": " which says that research is a process."}, {"start": 509.68, "end": 515.32, "text": " And not look at where we are, look at where we will be two more papers down the line."}, {"start": 515.32, "end": 523.96, "text": " Here are some results from Dolly 1 and just a year later Dolly 2 was capable of this."}, {"start": 523.96, "end": 526.0, "text": " Just a year later."}, {"start": 526.0, "end": 528.28, "text": " That is unbelievable."}, {"start": 528.28, "end": 532.5600000000001, "text": " Just imagine what we will be able to do five years from now."}, {"start": 532.5600000000001, "end": 536.84, "text": " If you have some ideas, make sure to leave a comment about that below."}, {"start": 536.84, "end": 544.6800000000001, "text": " So finally, stable diffusion, a free and open source solution for AI-based image generation."}, {"start": 544.6800000000001, "end": 546.5600000000001, "text": " Double thumbs up."}, {"start": 546.5600000000001, "end": 552.12, "text": " This is something for everyone out there and it really shows the power of collaboration"}, {"start": 552.12, "end": 557.96, "text": " as us tinkerers around the world who work together to make something amazing."}, {"start": 557.96, "end": 559.6, "text": " I love it."}, {"start": 559.6, "end": 561.2, "text": " Thank you so much."}, {"start": 561.2, "end": 566.8000000000001, "text": " And note that all this took about $600,000 to train."}, {"start": 566.8, "end": 568.68, "text": " Now make no mistake."}, {"start": 568.68, "end": 570.4399999999999, "text": " That is a lot of dollars."}, {"start": 570.4399999999999, "end": 576.52, "text": " But this also means that creating an AI like this does not cost tens of millions of dollars"}, {"start": 576.52, "end": 577.52, "text": " anymore."}, {"start": 577.52, "end": 584.28, "text": " And the team at Stability AI is already working on a smaller and cheaper model than this."}, {"start": 584.28, "end": 592.4399999999999, "text": " So we are now entering not only the age of AI-based image generation, but the age of free and"}, {"start": 592.4399999999999, "end": 595.24, "text": " open AI-based image generation."}, {"start": 595.24, "end": 599.4, 
"text": " Oh yes, and for now, let the experiments begin."}, {"start": 599.4, "end": 606.36, "text": " If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 606.36, "end": 609.8, "text": " in the world for GPU cloud compute."}, {"start": 609.8, "end": 612.76, "text": " No commitments or negotiation required."}, {"start": 612.76, "end": 615.88, "text": " Just sign up and launch an instance."}, {"start": 615.88, "end": 623.48, "text": " And hold on to your papers because with Lambda GPU cloud, you can get on-demand A100 instances"}, {"start": 623.48, "end": 629.84, "text": " for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 629.84, "end": 632.9200000000001, "text": " That's 73% savings."}, {"start": 632.9200000000001, "end": 636.4, "text": " Did I mention they also offer persistent storage?"}, {"start": 636.4, "end": 644.5600000000001, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 644.5600000000001, "end": 646.96, "text": " workstations or servers."}, {"start": 646.96, "end": 653.4, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 653.4, "end": 654.4, "text": " instances today."}, {"start": 654.4, "end": 683.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jRMy6lxlqjM
A 1,000,000,000 Particle Simulation! 🌊
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "A Fast Unsmoothed Aggregation Algebraic Multigrid Framework for the Large-Scale Simulation of Incompressible Flow" is available here: http://computationalsciences.org/publications/shao-2022-multigrid.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to find out whether it is possible to write a fluid simulation on a computer that showcases no less than 1 billion particles. So, is this the one? Maybe. Of course, I will tell you in a moment. But note that there are plenty of fluid and physics simulation research papers out there. This is a mature research field, and we can do so much today. Here are four of my favorite works in this area, and then we will talk about what the problem with them is. One, in reality, we can experiment with whatever objects we have at our disposal, but in a simulation, we can do anything, including changing the physical parameters of these objects, and thus, this previous work can simulate three jello blocks of different stiffness values. So cool! Two, this technique simulates strong two-way coupling. What does that mean? Well, you see a simulation here that doesn't have it. So is this correct? Well, not quite. The honey should be supporting the dipper, and it not only falls, but it falls in a very unnatural way. Instead, this technique can simulate strong two-way coupling, and finally it gets it right. And not only that, but what I really love about this is that it also gets small nuances right. I will try to speed up the footage a little so you can see that the honey doesn't only support the dipper, but the dipper still has some subtle movements, both in reality and in the simulation. A plus, love it! Three, we can even simulate the physics of baking on a computer. Let's see. Yes, expansion and baking is happening here. Are we done? Well, let's have a look inside. Yep, this is a good one. Yum! And perhaps my favorite work in this area is this. Four, now you may be wondering, Károly, why are you showing a real video to me? Well, this is not a real video. This is a simulation. And so is this one. And hold onto your papers, because this is not a screenshot taken from the simulation. Nuh-uh. No, sir. This is a real photo. How cool is that? Whenever you feel a little sad, just think about the fact that through the power of computer graphics research, we can simulate reality on our computers. At least a small slice of reality. Absolutely amazing. However, there is a problem. And the problem is that all of these simulations have a price. And that price is time. Oh yes, some of these works require long all-nighters to compute. And unfortunately, as a research field matures, it gets more and more difficult to say something new and improve upon previous work. So, can we expect no more meaningful speedups at this point? Are we at a saturation point? Well, just two years ago, we talked about a technique that was able to simulate 100 million particles. My goodness. This was absolutely amazing. And now, just a couple more papers down the line, we go 10x. Yes, that's right. Here is an even better one that is finally able to simulate 10 times as many particles. Yes, 1 billion particles. Now hold onto your papers and let's marvel together at this glorious footage. This is a scene with 1 billion particles. Can that really be? Well, I can hardly believe it, but here it is. Look, that is a beautiful simulation. I love it. Now, this technique is not only able to simulate this many particles, but it also does it faster than previous techniques. The speedup factor is typically 3x, but not on honey buckling scenes. Why is that?
Well, the more viscous the fluids are — at the risk of simplification, let's say the more honey-like they are — the higher the speedup. How much higher? What? Yes, that's right. On this honey buckling scene, it is 15x faster. What took nearly two all-nighters now only takes an hour. That is insanity. What a time to be alive. Now, note that these simulations still take a while: about half a minute per frame for 150 million particles, and about 10 minutes per frame for a billion particles. But for a simulation of this quality, that is not a lot at all. Sign me up right now. So as you see, even for such a mature research field as fluid simulations in computer graphics, the pace of progress is nothing short of amazing. You see a lot of videos on what AI techniques are capable of today, and what you see here is through the power of sheer human ingenuity. So what do you think? Does this get your mind going? Let me know in the comments below. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time.
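As a hedged aside on what the paper actually speeds up: at the heart of each incompressible fluid step sits a giant sparse linear solve for pressure, and the paper's algebraic multigrid framework is a fast solver for exactly that kind of system. The toy NumPy sketch below shows the system and a slow Jacobi baseline on a tiny 2D grid — an illustration of the problem being solved, not the paper's algorithm.

```python
# Toy illustration: the pressure solve lap(p) = div(u) at the core of
# incompressible fluid simulation. Plain Jacobi iteration is the slow
# baseline that multigrid methods accelerate by orders of magnitude.
import numpy as np

def jacobi_pressure_solve(div_u: np.ndarray, iters: int = 2000) -> np.ndarray:
    """Solve the 2D Poisson equation lap(p) = div_u with p = 0 on the boundary."""
    p = np.zeros_like(div_u)
    for _ in range(iters):
        # Each cell becomes the average of its four neighbours, corrected
        # by the local divergence (5-point Laplacian stencil, grid step 1).
        p[1:-1, 1:-1] = 0.25 * (
            p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
            - div_u[1:-1, 1:-1]
        )
    return p

rng = np.random.default_rng(0)
div_u = rng.standard_normal((64, 64))   # a made-up velocity divergence field
p = jacobi_pressure_solve(div_u)

# Residual lap(p) - div_u on the interior cells; it shrinks as iters grows.
res = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
       - 4 * p[1:-1, 1:-1]) - div_u[1:-1, 1:-1]
print("max |residual|:", np.abs(res).max())
```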
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Jona Ifehir."}, {"start": 4.76, "end": 11.88, "text": " Today we are going to find out whether it is possible to write a fluid simulation on a computer"}, {"start": 11.88, "end": 16.96, "text": " that showcases no less than 1 billion particles."}, {"start": 16.96, "end": 19.2, "text": " So, is this the one?"}, {"start": 19.2, "end": 20.28, "text": " Maybe."}, {"start": 20.28, "end": 22.72, "text": " Of course, I will tell you in a moment."}, {"start": 22.72, "end": 28.36, "text": " But note that there are plenty of fluid and physics simulation research papers out there."}, {"start": 28.36, "end": 33.76, "text": " This is a mature research field and we can do so much today."}, {"start": 33.76, "end": 38.76, "text": " Here are four of my favorite works in this area and then we will talk about what is the"}, {"start": 38.76, "end": 40.24, "text": " problem with them."}, {"start": 40.24, "end": 46.480000000000004, "text": " One, in reality, we can experiment with whatever objects we have at our disposal, but in"}, {"start": 46.480000000000004, "end": 49.68, "text": " a simulation, we can do anything."}, {"start": 49.68, "end": 55.28, "text": " Including changing the physical parameters of these objects and thus, this previous work"}, {"start": 55.28, "end": 60.32, "text": " can simulate three yellow blocks of different stiffness values."}, {"start": 60.32, "end": 61.32, "text": " So cool!"}, {"start": 61.32, "end": 66.04, "text": " Two, this technique simulates strong two-way coupling."}, {"start": 66.04, "end": 67.28, "text": " What does that mean?"}, {"start": 67.28, "end": 71.4, "text": " Well, you see a simulation here that doesn't have it."}, {"start": 71.4, "end": 73.24000000000001, "text": " So is this correct?"}, {"start": 73.24000000000001, "end": 75.12, "text": " Well, not quite."}, {"start": 75.12, "end": 81.12, "text": " The honey should be supporting the dipper and it not only falls, but it falls in a very"}, {"start": 81.12, "end": 82.6, "text": " unnatural way."}, {"start": 82.6, "end": 90.0, "text": " Instead, this technique can simulate strong two-way coupling and finally it gets it right."}, {"start": 90.0, "end": 96.6, "text": " And not only that, but what I really love about this is that it also gets small nuances"}, {"start": 96.6, "end": 97.6, "text": " right."}, {"start": 97.6, "end": 102.96, "text": " I will try to speed up the footage a little so you can see that the honey doesn't only"}, {"start": 102.96, "end": 109.52, "text": " support the dipper, but the dipper still has some subtle movements both in reality and"}, {"start": 109.52, "end": 110.75999999999999, "text": " in the simulation."}, {"start": 110.76, "end": 113.76, "text": " A plus, love it!"}, {"start": 113.76, "end": 119.08000000000001, "text": " Three, we can even simulate the physics of baking on a computer."}, {"start": 119.08000000000001, "end": 120.4, "text": " Let's see."}, {"start": 120.4, "end": 124.80000000000001, "text": " Yes, expansion and baking is happening here."}, {"start": 124.80000000000001, "end": 125.80000000000001, "text": " Are we done?"}, {"start": 125.80000000000001, "end": 128.04000000000002, "text": " Well, let's have a look inside."}, {"start": 128.04000000000002, "end": 130.64000000000001, "text": " Yep, this is a good one."}, {"start": 130.64000000000001, "end": 131.84, "text": " Yum!"}, {"start": 131.84, "end": 135.96, "text": " And perhaps my favorite work in this area is 
this."}, {"start": 135.96, "end": 142.32000000000002, "text": " Four, now you may be wondering, Karoy, why are you showing a real video to me?"}, {"start": 142.32000000000002, "end": 145.72, "text": " Well, this is not a real video."}, {"start": 145.72, "end": 147.96, "text": " This is a simulation."}, {"start": 147.96, "end": 149.88, "text": " And so is this one."}, {"start": 149.88, "end": 155.44, "text": " And hold onto your papers because this is not a screenshot taken from the simulation."}, {"start": 155.44, "end": 156.44, "text": " Nuh-uh."}, {"start": 156.44, "end": 158.16, "text": " No, sir."}, {"start": 158.16, "end": 160.20000000000002, "text": " This is a real photo."}, {"start": 160.20000000000002, "end": 162.24, "text": " How cool is that?"}, {"start": 162.24, "end": 167.20000000000002, "text": " Whatever you feel a little sad, just think about the fact that through the power of computer"}, {"start": 167.20000000000002, "end": 172.36, "text": " graphics research, we can simulate reality on our computers."}, {"start": 172.36, "end": 175.44, "text": " At least a small slice of reality."}, {"start": 175.44, "end": 177.20000000000002, "text": " Absolutely amazing."}, {"start": 177.20000000000002, "end": 180.44, "text": " However, there is a problem."}, {"start": 180.44, "end": 184.92000000000002, "text": " And the problem is that all of these simulations have a price."}, {"start": 184.92000000000002, "end": 186.92000000000002, "text": " And that price is time."}, {"start": 186.92, "end": 192.83999999999997, "text": " Oh yes, some of these works require long, all nighters to compute."}, {"start": 192.83999999999997, "end": 198.51999999999998, "text": " And unfortunately, as a research field matures, it gets more and more difficult to say something"}, {"start": 198.51999999999998, "end": 201.48, "text": " new and improve upon previous work."}, {"start": 201.48, "end": 206.56, "text": " So, can we expect no more meaningful speedups at this point?"}, {"start": 206.56, "end": 208.88, "text": " Are we at a saturation point?"}, {"start": 208.88, "end": 215.44, "text": " Well, just two years ago, we talked about a technique that was able to simulate 100"}, {"start": 215.44, "end": 217.35999999999999, "text": " million particles."}, {"start": 217.35999999999999, "end": 218.96, "text": " My goodness."}, {"start": 218.96, "end": 221.4, "text": " This was absolutely amazing."}, {"start": 221.4, "end": 226.52, "text": " And now, just a couple more papers down the line, we go 10x."}, {"start": 226.52, "end": 228.92, "text": " Yes, that's right."}, {"start": 228.92, "end": 235.16, "text": " Here is an even better one that is finally able to simulate 10 times as many particles."}, {"start": 235.16, "end": 238.64, "text": " Yes, 1 billion particles."}, {"start": 238.64, "end": 244.64, "text": " Now hold onto your papers and let's marvel together at this glorious footage."}, {"start": 244.64, "end": 247.6, "text": " This is a scene with 1 billion."}, {"start": 247.6, "end": 248.92, "text": " Can that really be?"}, {"start": 248.92, "end": 252.32, "text": " Well, I can hardly believe it, but here it is."}, {"start": 252.32, "end": 255.72, "text": " Look, that is a beautiful simulation."}, {"start": 255.72, "end": 257.56, "text": " I love it."}, {"start": 257.56, "end": 264.68, "text": " Now this technique is not only able to simulate this many particles, but it also does it faster"}, {"start": 264.68, "end": 266.32, "text": " than previous techniques."}, {"start": 266.32, "end": 272.44, "text": " 
The speedup factor is typically 3x, but not on honeybuckling scenes."}, {"start": 272.44, "end": 273.76, "text": " Why is that?"}, {"start": 273.76, "end": 279.15999999999997, "text": " Well, the more viscous the fluids are, at the risk of simplification, let's say that"}, {"start": 279.15999999999997, "end": 283.12, "text": " the more honey like they are, the higher the speedup."}, {"start": 283.12, "end": 284.71999999999997, "text": " How much higher?"}, {"start": 284.71999999999997, "end": 285.71999999999997, "text": " What?"}, {"start": 285.71999999999997, "end": 287.88, "text": " Yes, that's right."}, {"start": 287.88, "end": 292.56, "text": " On this honeybuckling scene, it is 15x faster."}, {"start": 292.56, "end": 299.4, "text": " What took nearly two onliters now only takes an hour that is insanity."}, {"start": 299.4, "end": 301.2, "text": " What a time to be alive."}, {"start": 301.2, "end": 308.36, "text": " Now note that these simulations still take a while about half a minute per frame for 150"}, {"start": 308.36, "end": 314.28, "text": " million particles and about 10 minutes per frame for a billion particles."}, {"start": 314.28, "end": 319.91999999999996, "text": " But for a simulation of this quality, that is not a lot at all."}, {"start": 319.91999999999996, "end": 321.96, "text": " Sign me up right now."}, {"start": 321.96, "end": 328.52, "text": " So as you see, even for such a mature research field as fluid simulations in computer graphics,"}, {"start": 328.52, "end": 331.52, "text": " the pace of progress is nothing short of amazing."}, {"start": 331.52, "end": 337.79999999999995, "text": " You see a lot of videos on what AI techniques are capable of today and what you see here"}, {"start": 337.79999999999995, "end": 342.0, "text": " is through the power of sheer human ingenuity."}, {"start": 342.0, "end": 343.84, "text": " So what do you think?"}, {"start": 343.84, "end": 345.56, "text": " Does this get your mind going?"}, {"start": 345.56, "end": 347.28, "text": " Let me know in the comments below."}, {"start": 347.28, "end": 352.79999999999995, "text": " Waits and biases provides tools to track your experiments in your deep learning projects."}, {"start": 352.79999999999995, "end": 357.91999999999996, "text": " Using their system, you can create beautiful reports like this one to explain your findings"}, {"start": 357.92, "end": 359.92, "text": " to your colleagues better."}, {"start": 359.92, "end": 366.88, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more."}, {"start": 366.88, "end": 372.92, "text": " And the best part is that waits and biases is free for all individuals, academics and"}, {"start": 372.92, "end": 374.68, "text": " open source projects."}, {"start": 374.68, "end": 380.92, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 380.92, "end": 384.6, "text": " description and you can get a free demo today."}, {"start": 384.6, "end": 395.32000000000005, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=FCf8OA4GPvI
OpenAI’s DALL-E 2 - AI-Based Art Is Here! 🧑‍🎨
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ https://openai.com/blog/dall-e-2-extending-creativity/ 🧑‍🎨 Check out Felícia Zsolnai-Fehér's works: https://www.instagram.com/feliciart_86/ 🧑‍🎨 Judit Somogyvári's works: https://www.artstation.com/sheyenne https://www.instagram.com/somogyvari.art/ Credits: Stan Brown - https://www.artstation.com/artwork/Le94a5 Road maker - https://twitter.com/Bbbn192/status/1550150050562674692 GuyP - https://twitter.com/GuyP/status/1552681944437166081 + https://dallery.gallery/free-photo-image-editing-tools-ai-dalle/ Weathered man - https://www.reddit.com/r/dalle2/comments/vow7vs/a_photo_of_weathered_looking_man_with_dirt_on_his/ Cosmopolitan - https://www.cosmopolitan.com/lifestyle/a40314356/dall-e-2-artificial-intelligence-cover/ Toilet car: https://twitter.com/PaulYacoubian/status/1514955904659173387/photo/2 OpenAI - Extending Creativity https://openai.com/blog/dall-e-2-extending-creativity/ AI assisted shoe design - https://www.instagram.com/reel/Cehg3WbpaN5/ AI assisted dress design - https://twitter.com/paultrillo/status/1562106954096381952 +1: seamless texture full workflow: https://twitter.com/pushmatrix/status/1564988182780846083 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-357336/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Teaser 0:10 What is DALL-E 2? 1:22 Is it as good as human artists? 2:10 1 - Texture synthesis 4:53 2 - Photorealistic faces! 5:36 3 - Cosmopolitan 6:00 4 - Product design 6:18 5 - 3D information 6:58 Building virtual worlds 7:54 +1 - Product design, but better! Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. These amazing images were all made by an AI called DALL-E 2. This AI is endowed with a diffusion-based model, which means that when we ask it for something, it starts out from noise, and over time, it iteratively refines this image to match our description better. And over time, magically, an absolutely incredible image emerges. This is a neural network that was given a ton of images, each with a piece of text that says what is in that image. That is one image-caption pair. DALL-E 2 is given millions and millions of these pairs. So, what can it do with all this knowledge? The key takeaway here is that it does no or very little copying from its training data; it truly comes up with novel images. How? Well, after it has seen a bunch of images of koalas, and separately, a bunch of images of motorcycles, it starts to understand both concepts and becomes able to combine the two into a completely new image. Previously, we also compared these works to what real human artists are capable of. And in my opinion, this tool should not be framed as team humanity versus team AI. I believe this should be framed as team humanity supercharged by team AI. Here is an example of that. And the results are so good, and it is so easy to use, that I have said on several occasions that this is going to change the world of art creation as we know it. And you are an experienced Fellow Scholar, so you may ask the question: okay, but how? So, here are five beautiful examples of how artists are already using it. One, texture synthesis. Oh yes, whenever we see these beautiful animated movies or video games, we see a lot of real-looking content in there. The roads are realistic, and the buildings and hills are also realistic. We can get a great deal of help in making them this realistic by taking photographs of real objects and importing them into our virtual world. But there is a problem. What is the problem? Well, let's see together. Let's take a photo. There is a stain on the wall. Is that the problem? No, that's not the problem. We can cut that out. Now, we need to extend this texture. But this is just a small snippet, and we have long walls and big buildings to fill in a video game, so what do we do? Well, we start tiling it. And, uh oh, now that is the problem. It is quite visible what happened to this texture. It has been extended, but at the cost of seams and discontinuities appearing in the image. So, how do we get a seamless texture from this? Well, DALL-E 2 hopefully understands that this is a rock wall, so can it fix this? Now, hold on to your papers, and let's see together. Yes, it can. Amazing. Now, let's have a look at it attached to a virtual object and see how it looks in practice. Oh my, this is gorgeous. I love it. Here is an even more difficult example with paint peeling off the wall. This is the real-world phone photo, and this is the tiled version with the seams. And now, DALL-E 2. Wow! I could never tell that this image is not a real photo. And have a look at it added to a video game. So good. By the way, these are Stan Brown's works; he used DALL-E 2 to great effect here. Make sure to check out his work. For instance, he also made these amazing materials for one of my favorite games of all time, Path of Exile. The link to his work is available in the video description. And just imagine using the free 3D modeling tool Blender and this plugin to create roads.
And you can make it an asphalt road or a dirt road, or make it look like pavement, in just a couple of clicks with DALL-E 2. Two, DALL-E 2 is now able to generate real human faces too, and sometimes they come out really well. And sometimes, not so much. No matter; in these cases, we can use a face restoration program to fix some of the bigger flaws, then run it through a super-resolution program to add more details, and bam! There we go. Fantastic. And if we have a portrait that works for us, we can plug it into this amazing tool called Deep Nostalgia to create a virtual, animated version of this person. How cool is that? What a time to be alive! Three, this one speaks for itself. Cosmopolitan magazine made their first cover from an image that was made by an AI. No tricks, no frills, just a good old image straight out of DALL-E 2. And the kicker is that some of them are already good enough to become magazine covers. That is incredible. Talk about the age of AI image creation. Four, this tool can even help us create new product designs. And I mean, not this one, although this toilet car might have some utility if you need to go while you're on the go; I meant this one instead. And finally, five, we can also combine DALL-E 2 with other tools that take a 2D image and try to build an understanding of it as if it were a 3D scene. And the results are truly stunning. We can get so much more out of these images. And one of the important lessons here is that not only is AI-generated art improving at an incredible pace, but also that everything is connected. So many of these tools can be made even more powerful if we use our imagination and combine them correctly. This is huge. For instance, Nvidia is already building virtual copies of the real world around us. A copy of the real world? Why is that? Well, because this way, we can re-enact real situations for their self-driving cars, and even create arbitrarily complex new situations, and these AIs will be able to learn from them in a safe environment. Self-driving can also be combined with a number of other technologies that make it an even more amazing experience. See the assistant there? It sees and identifies the passenger and understands natural language. It also understands which building is which and what shows are playing in them today. So we might jump into the car not even knowing where we wish to go, and it would not only help us find a good show, but it would also drive us there. And plus one: remember the shoe design example? How about trying on pieces of clothing, not one by one as we do in real life, but all of them at the same time? Yes, these are all AI-generated images, and it is not just one still image, but video footage where the camera moves around. This is spectacular. Imagine what buying clothes will look like when we have algorithms like this. Wow. Or, if an actor did not have the appropriate attire for a scene in a movie, not to worry; just change it in post with the AI. How cool is that? You see, once again, everything is connected. So, does this get your mind going? What would you combine with DALL-E 2? If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up, launch an instance, and hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour, versus $4.10 per hour with AWS. That's 73% savings.
Did I mention they also offer persistent storage? So, join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
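For the curious: the "starts out from noise and iteratively refines" description in the transcript above is the essence of diffusion sampling. Below is a minimal, heavily simplified sketch of that loop; `denoiser` stands in for a hypothetical trained network (DALL-E 2's actual model is not public), and the update rule is a toy one, not the paper's sampler:

```python
import torch

def sample_image(denoiser, text_embedding, steps: int = 50,
                 shape=(1, 3, 64, 64)) -> torch.Tensor:
    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(1, steps + 1)):
        # The network guesses the noise still present in x, conditioned
        # on the text; subtracting a little of it refines the image.
        predicted_noise = denoiser(x, t, text_embedding)
        x = x - predicted_noise / steps
        if t > 1:
            x = x + 0.01 * torch.randn_like(x)  # keep some stochasticity
    return x  # after many small steps, an image matching the text emerges
```

The same loop also explains the texture-synthesis trick: conditioning the refinement on the surrounding pixels lets the model fill in a region, such as the seams of a tiled rock wall, so that it blends with its neighborhood.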
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 11.0, "text": " These amazing images were all made by an AI called Dolly II."}, {"start": 11.0, "end": 17.0, "text": " This AI is endowed with a diffusion-based model which means that when we ask it something,"}, {"start": 17.0, "end": 26.0, "text": " it starts out from noise and over time it iteratively refines this image to match our description better."}, {"start": 26.0, "end": 32.0, "text": " And over time, magically an absolutely incredible image emerges."}, {"start": 32.0, "end": 41.0, "text": " This is a neural network that was given a ton of images and a piece of text that says what is in this image."}, {"start": 41.0, "end": 49.0, "text": " That is one image caption pair. Dolly II is given millions and millions of these pairs."}, {"start": 49.0, "end": 52.0, "text": " So, what can it do with all this knowledge?"}, {"start": 52.0, "end": 58.0, "text": " The key takeaway here is that it does know or very little copying from this training data,"}, {"start": 58.0, "end": 63.0, "text": " but it truly comes up with novel images."}, {"start": 63.0, "end": 72.0, "text": " How? Well, after it had seen a bunch of images of koalas and separately a bunch of images of motorcycles,"}, {"start": 72.0, "end": 81.0, "text": " it starts to understand the concept of both and it will be able to combine the two together into a completely new image."}, {"start": 81.0, "end": 88.0, "text": " Previously, we also compared these works to what real human artists are capable of."}, {"start": 88.0, "end": 95.0, "text": " And in my opinion, this tool should not be framed as theme humanity versus theme AI."}, {"start": 95.0, "end": 101.0, "text": " I believe this should be framed as theme humanity supercharged by theme AI."}, {"start": 101.0, "end": 104.0, "text": " Here is an example of that."}, {"start": 104.0, "end": 116.0, "text": " And the results are so good and it is so easy to use, I said on several occasions that this is going to change the world of art creation as we know it."}, {"start": 116.0, "end": 124.0, "text": " And you are an experienced fellow scholar, so you may ask the question, okay, but how?"}, {"start": 124.0, "end": 130.0, "text": " So, here are five beautiful examples of how artists are already using it."}, {"start": 130.0, "end": 142.0, "text": " One, texture synthesis. Oh yes, whenever we see these beautiful animation movies or video games, we see a lot of real looking content in there."}, {"start": 142.0, "end": 147.0, "text": " The roads are realistic, the buildings and hills are also realistic."}, {"start": 147.0, "end": 158.0, "text": " We can get a great deal of help in making them this realistic by taking photographs of real objects and importing them into our virtual world."}, {"start": 158.0, "end": 164.0, "text": " But there is a problem. What is the problem? Well, let's see together."}, {"start": 164.0, "end": 173.0, "text": " Let's take a photo. There is a stain on the wall. Is that the problem? No, that's not the problem. We can cut that out."}, {"start": 173.0, "end": 186.0, "text": " Now, we need to extend this texture, but since this is just a small snippet and we have long walls and big buildings to fill in a video game, so what do we do?"}, {"start": 186.0, "end": 192.0, "text": " Well, we start tiling it. 
And, oh oh, now that is the problem."}, {"start": 192.0, "end": 203.0, "text": " It is quite visible what happened to this texture. It has been extended, but at the cost of seams and discontinuities appearing in the image."}, {"start": 203.0, "end": 214.0, "text": " So, how do we get a seamless texture from this? Well, Dolly too, hopefully understands that this is a rock wall and can it fix this?"}, {"start": 214.0, "end": 221.0, "text": " Now, hold on to your papers and let's see together. Yes, it can. Amazing."}, {"start": 221.0, "end": 229.0, "text": " Now, let's have a look at it attached to a real virtual object and see how it looks in practice."}, {"start": 229.0, "end": 239.0, "text": " Oh, my, this is gorgeous. I love it. Here is an even more difficult example with paint peeling off the wall."}, {"start": 239.0, "end": 248.0, "text": " This is the real world phone photo and the Tad version with the seams. And now Dolly too. Wow!"}, {"start": 248.0, "end": 257.0, "text": " I could never tell that this image is not a real photo and have a look at it added to a video game. So good."}, {"start": 257.0, "end": 272.0, "text": " By the way, these are Stan Brown's works who used Dolly too to great effect here. Make sure to check out his work. For instance, he also made these amazing materials for one of my favorite games of all time, Path of Exile."}, {"start": 272.0, "end": 284.0, "text": " The link to his work is available in the video description. And just imagine using the free 3D modeling tool, blender, and this plugin to create roads."}, {"start": 284.0, "end": 293.0, "text": " And you can make it an asphalt road, dirt road, or make it look like pavement in just a couple of clicks with Dolly too."}, {"start": 293.0, "end": 304.0, "text": " Two, Dolly too is now able to generate real human faces too and sometimes they come out really well. And sometimes not so much."}, {"start": 304.0, "end": 317.0, "text": " No matter in these cases we can use a face restoration program to fix some of the bigger flaws and run it through a super resolution program to add more details and bam!"}, {"start": 317.0, "end": 332.0, "text": " There we go. Fantastic. And if we have a portrait that works for us, we can plug it into this amazing tool called Deepnostelja to create a virtual animated version of this person."}, {"start": 332.0, "end": 346.0, "text": " How cool is that? What a time to be alive! Three, this one speaks for itself. The Cosmopolitan magazine made their first cover from an image that was made by an AI."}, {"start": 346.0, "end": 358.0, "text": " No tricks, no frills, just a good old image straight out of Dolly too. And the kicker is that some of them are already good enough to become magazine covers."}, {"start": 358.0, "end": 379.0, "text": " That is incredible. Talk about the age of AI image creation. Four, this tool can even help us create new product designs. And I mean not this one, although this toilet car might have some utility if you need to go when you're on the go, but I meant this one instead."}, {"start": 379.0, "end": 391.0, "text": " And finally, five, we can also combine Dolly too with other tools that take a 2D image and try to build an understanding as if it were a 3D scene."}, {"start": 391.0, "end": 408.0, "text": " And the results are truly stunning. We can get so much more out of these images. 
And one of the important lessons here is that not only AI-generated art is improving at an incredible pace, but an additional lesson is that everything is connected."}, {"start": 408.0, "end": 424.0, "text": " So many of these tools can be made even more powerful if we use our imagination and combine them correctly. This is huge. For instance, Nvidia is already building virtual copies of the real world around us."}, {"start": 424.0, "end": 443.0, "text": " A copy of the real world, why is that? Well, because this way we can re-enact real situations for their self-driving cars and even create arbitrarily complex new situations and these AI's will be able to learn from them in a safe environment."}, {"start": 443.0, "end": 458.0, "text": " Self-driving can also be combined with a number of other technologies that make it an even more amazing experience. See the assistant there? It sees and identifies the passenger and understands natural language."}, {"start": 458.0, "end": 475.0, "text": " And also understands which building is which and what shows are played in them today. So we might jump into the car, not even knowing where we wish to go, and it would not only help us find a good show, but it would also drive us there."}, {"start": 475.0, "end": 490.0, "text": " And plus one, remember the shoe design example, how about trying on pieces of clothing, but not one by one as we do in real life, but trying on all of the clothes at the same time."}, {"start": 490.0, "end": 508.0, "text": " Yes, these are all AI-generated images and it is not just one still image, but video footage where the camera moves around. This is spectacular. Imagine what buying clothes will look like when we have algorithms like this."}, {"start": 508.0, "end": 523.0, "text": " Wow, or if an actor did not have the appropriate attire for a scene in a movie, not to worry, just change it in post with the AI. How cool is that? You see, once again, everything is connected."}, {"start": 523.0, "end": 538.0, "text": " So, does this get your mind going? What would you combine with Dolly too? If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute."}, {"start": 538.0, "end": 562.0, "text": " No commitments or negotiation required. Just sign up and launch an instance and hold on to your papers because with Lambda GPU cloud, you can get on demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings."}, {"start": 562.0, "end": 576.0, "text": " Did I mention they also offer persistent storage, so join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers."}, {"start": 576.0, "end": 588.0, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uboj01Gfy1A
Microsoft’s New AI: The Selfies Of The Future! 🤳
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "GRAM-HD: 3D-Consistent Image Generation at High Resolution with Generative Radiance Manifolds" is available here: https://jeffreyxiang.github.io/GRAM-HD/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to use an AI to take a collection of photos and hopefully create a digital human out of them. However, not so fast. For instance, have a look at Nvidia's amazing face generator AI. As you see, it can generate beautiful results, and they even show us the illusion of these characters moving around. However, this does not have a strong concept of 3D information. These are 2D images, and the 3D consistency is not that great. What is that? Well, look: this person, just one frame away, is not exactly the same person. For instance, the hair moves around. This means that these images are not directable, or at least not easily. As you see here, it has improved a great deal over two years and one more research paper down the line, but it is still not quite there. So, fine details: checkmark. 3D consistency: not quite there. So, let's have a look at solutions that have more 3D consistency. To be able to do that, we can take a collection of photos like these and magically create a video where we can fly through these photos. It is really crazy, because this is possible today. For instance, here is Nvidia's method that can be trained to perform this in a matter of seconds. This is remarkable, especially since the input is only a handful of photos. Some information is given about the scene, but this is really not much. So, can we have a solution with 3D consistency? Yes, indeed; for these, 3D consistency: checkmark. However, the amount of fine detail in these outputs is not that great. Hmm, do you see the pattern here? Depending on which previous method we use, we either get tons of detail but no 3D consistency, or we can get our highly coveted 3D consistency, but then the details are gone. So, is the dream dead? No digital humans for us? Well, don't despair, and have a look at this new technique that promises one-megapixel images that are also consistent. Details and consistency at the same time. When I first saw this paper, I said that I will believe it when I see it. This is Steiner, the previous technique, and we see the problems that we now expect: the hair is not consistent, the earring is flickering a great deal, and there are other issues too. So, are the authors of the new paper claiming that they can solve all this? Well, now hold on to your papers and have a look at this. This is the new technique. Oh my goodness, the earring, the facial features and the hair stay still as we rotate them, and it indeed shows the new angles correctly. I love it. And let's examine this phenomenon in a little more detail. Look at the hair here with Steiner, the previous technique. It looks decent, but it's not quite there. And I really wonder why that is. Let's zoom in and have a closer look. Oh yes, upon closer inspection, the hair is a bit of a mess. And with the new technique, now that is what I call consistency across the video frames. Smooth hair strands everywhere. So good. And what I absolutely loved about this new work is that it outperforms this previous technique, which is from how many years ago? Well, if you have been holding on to your papers, now squeeze that paper, because this work is from not even one year ago, just from eight months ago. And you already see a meaningful improvement over that. That is insanity. Now, of course, not even this technique is perfect.
I don't know for sure whether the hair consistency is perfect here and we are just dealing with video compression artifacts, or whether this is still not 100% there, but this is truly a great step forward. Also, here you see that the images are still not as detailed as the photos, but this seems to me to be a roadblock that is so much easier to solve than 3D consistency. My impression is that with this, the most difficult part of the task is already done, and just one or two more papers down the line, I am sure we will be seeing even more realistic virtual humans. So, can we enter into a virtual world as a digital human from just a collection of photos? Oh yes. Time to meet our beloved ones from afar, or meet new people and play some games together. What a time to be alive. So, does this get your mind going? What would you use this for? Let me know in the comments below. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a Fellow Scholar with an open mind. Make sure to visit them through wandb.me slash gd, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
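The 3D-consistency complaints in the transcript above, flickering hair and jittering earrings, can be made quantitative. Here is a minimal sketch of one crude way to do it, invented for this illustration rather than taken from the paper's evaluation protocol; it assumes the rendered frames are already available as numpy arrays:

```python
import numpy as np

def flicker_score(frames) -> float:
    """Mean absolute change between consecutive frames.

    For a slowly rotating camera, a 3D-consistent generator should
    change smoothly, so lower is better; frame-to-frame jumps in the
    hair or jewelry show up as a high score.
    """
    diffs = [np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))
```

Note that this conflates genuine motion with inconsistency, which is exactly why the narration hedges about video compression artifacts; a real evaluation would compare against re-rendered ground truth instead.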
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 13.76, "text": " Today, we are going to use an AI to take a collection of photos and hopefully create a digital human out of them."}, {"start": 13.76, "end": 16.16, "text": " However, not so fast."}, {"start": 16.16, "end": 21.36, "text": " For instance, have a look at Nvidia's amazing Feast Generator AI."}, {"start": 21.36, "end": 28.64, "text": " As you see, it can generate beautiful results and they even show us the illusion of these characters"}, {"start": 28.64, "end": 30.080000000000002, "text": " moving around."}, {"start": 30.080000000000002, "end": 35.04, "text": " However, this does not have a strong concept of 3D information."}, {"start": 35.04, "end": 40.4, "text": " These are 2D images and 3D consistency is not that great."}, {"start": 40.4, "end": 41.92, "text": " What is that?"}, {"start": 41.92, "end": 48.56, "text": " Well, look, this person just one frame away is not exactly the same person."}, {"start": 48.56, "end": 51.6, "text": " For instance, the hair moves around."}, {"start": 51.6, "end": 57.519999999999996, "text": " This means that these images are not or directable or at least not easily."}, {"start": 57.52, "end": 64.56, "text": " As you see here, it has improved a great deal over 2 years and one more research paper down the line,"}, {"start": 64.56, "end": 67.84, "text": " but it's still not quite there."}, {"start": 67.84, "end": 73.44, "text": " So, find details, checkmark, 3D consistency, not quite there."}, {"start": 73.44, "end": 78.88, "text": " So, let's have a look at solutions that have more 3D consistency."}, {"start": 78.88, "end": 83.44, "text": " To be able to do that, we can take a collection of photos like these"}, {"start": 83.44, "end": 89.28, "text": " and magically create a video where we can fly through these photos."}, {"start": 89.92, "end": 93.84, "text": " It is really crazy because this is possible today."}, {"start": 93.84, "end": 100.4, "text": " For instance, here is Nvidia's method that can be trained to perform this in a matter of seconds."}, {"start": 101.12, "end": 106.8, "text": " This is remarkable, especially that the input is only a handful of photos."}, {"start": 107.52, "end": 112.24, "text": " Some information is given about the scene, but this is really not much."}, {"start": 112.24, "end": 116.32, "text": " So, can we have a solution with 3D consistency?"}, {"start": 116.32, "end": 121.28, "text": " Yes, indeed, for these 3D consistency, checkmark."}, {"start": 121.28, "end": 126.32, "text": " However, the amount of fine details in these outputs is not that great."}, {"start": 126.32, "end": 129.35999999999999, "text": " Hmm, do you see the pattern here?"}, {"start": 129.35999999999999, "end": 134.24, "text": " Depending on which previous method we use, we either get tons of detail,"}, {"start": 134.24, "end": 141.12, "text": " but no 3D consistency or we can get our highly coveted 3D consistency."}, {"start": 141.12, "end": 143.6, "text": " But then, the details are gone."}, {"start": 144.32, "end": 148.08, "text": " So, is the dream dead? 
No digital humans for us?"}, {"start": 148.88, "end": 154.0, "text": " Well, don't despair and have a look at this new technique that promises"}, {"start": 154.0, "end": 157.84, "text": " one megapixel images that are also consistent."}, {"start": 158.48000000000002, "end": 161.76, "text": " Details and consistency at the same time."}, {"start": 161.76, "end": 166.4, "text": " When I first saw this paper, I said that I will believe it when I see it."}, {"start": 166.4, "end": 173.28, "text": " This is Steiner, the previous technique and we see the problems that we now expect."}, {"start": 173.28, "end": 178.32, "text": " The hair is not consistent, the earring is flickering a great deal,"}, {"start": 178.32, "end": 180.72, "text": " and there are other issues too."}, {"start": 180.72, "end": 186.48000000000002, "text": " So, are the authors of the new paper claiming that they can solve all this?"}, {"start": 186.48000000000002, "end": 191.52, "text": " Well, now hold on to your papers and have a look at this."}, {"start": 191.52, "end": 193.52, "text": " This is the new technique."}, {"start": 193.52, "end": 201.44, "text": " Oh my goodness, the earring, facial features and the hair stay still as we rotate them,"}, {"start": 201.44, "end": 204.64000000000001, "text": " and it indeed shows the new angles correctly."}, {"start": 205.36, "end": 206.4, "text": " I love it."}, {"start": 206.96, "end": 211.04000000000002, "text": " And let's examine this phenomenon in a little more detail."}, {"start": 211.36, "end": 213.44, "text": " Look at the hair here with Steiner."}, {"start": 213.92000000000002, "end": 218.16000000000003, "text": " The previous technique, it looks decent, but it's not quite there."}, {"start": 218.64000000000001, "end": 220.96, "text": " And I really wonder why that is."}, {"start": 220.96, "end": 224.08, "text": " Let's zoom in and have a closer look."}, {"start": 225.28, "end": 229.76000000000002, "text": " Oh yes, upon closer inspection, the hair is a bit of a mess."}, {"start": 230.24, "end": 236.0, "text": " And with the new technique, now that is what I call consistency across the video frames."}, {"start": 236.0, "end": 238.24, "text": " Smooth hair strands everywhere."}, {"start": 239.04000000000002, "end": 239.60000000000002, "text": " So good."}, {"start": 240.16, "end": 246.32, "text": " And what I absolutely loved about this new work is that it outperforms this previous technique,"}, {"start": 246.32, "end": 248.16, "text": " which is from how many years ago."}, {"start": 248.16, "end": 254.72, "text": " Well, if you have been holding on to your papers, now squeeze that paper because this work"}, {"start": 254.72, "end": 259.12, "text": " is from not even one year ago, just from eight months ago."}, {"start": 259.68, "end": 263.52, "text": " And you already see a meaningful improvement over that."}, {"start": 263.52, "end": 265.12, "text": " That is insanity."}, {"start": 265.68, "end": 269.12, "text": " Now, of course, not even this technique is perfect."}, {"start": 269.12, "end": 273.2, "text": " I don't know for sure if the hair consistency is perfect here,"}, {"start": 273.2, "end": 279.92, "text": " and we are dealing with video compression artifacts or whether this is still not 100% there,"}, {"start": 279.92, "end": 283.03999999999996, "text": " but this is truly a great step forward."}, {"start": 283.59999999999997, "end": 289.68, "text": " Also, here you see that the images are still not as detailed as the photos,"}, {"start": 289.68, "end": 296.4, 
"text": " but this seems to be a roadblock to me that is so much easier to solve than 3D consistency."}, {"start": 296.96, "end": 302.32, "text": " My impression is that with this, the most difficult part of the task is already done,"}, {"start": 302.32, "end": 308.56, "text": " and just one or two more papers down the line, I am sure we will be seeing even more realistic"}, {"start": 308.56, "end": 309.36, "text": " virtual humans."}, {"start": 310.08, "end": 316.48, "text": " So, can we enter into a virtual world as a digital human from just a collection of photos?"}, {"start": 317.2, "end": 324.71999999999997, "text": " Oh yes. Time to meet our beloved ones from afar or meet new people and play some games together."}, {"start": 324.71999999999997, "end": 326.08, "text": " What a time to be alive."}, {"start": 326.71999999999997, "end": 329.03999999999996, "text": " So, does this get your mind going?"}, {"start": 329.03999999999996, "end": 330.96, "text": " What would you use this for?"}, {"start": 330.96, "end": 335.91999999999996, "text": " Let me know in the comments below. This video has been supported by weights and biases."}, {"start": 335.91999999999996, "end": 342.08, "text": " They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts"}, {"start": 342.08, "end": 347.44, "text": " who discuss how they use learning based algorithms to solve real world problems."}, {"start": 347.44, "end": 354.96, "text": " They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more."}, {"start": 354.96, "end": 358.0, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 358.0, "end": 365.44, "text": " Make sure to visit them through wmb.me slash gd or just click the link in the video description."}, {"start": 365.44, "end": 368.88, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 368.88, "end": 371.68, "text": " and for helping us make better videos for you."}, {"start": 371.68, "end": 388.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ICQIx_C8mHo
Is Google’s New AI As Smart As A Human? 🤖
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Minerva - Solving Quantitative Reasoning Problems with Language Models" is available here: https://arxiv.org/abs/2206.14858 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to have a look at Google's new AI and find out whether it is as smart as a human, at least in some test subjects. Not so long ago, DeepMind published an AI technique whose knowledge in the areas of humanities, social sciences and medicine was fantastic, but at mathematics, of all things, not so much. Interestingly, it also thought that it was a genius, and then, amusingly, it got quite confused by very simple questions. Look at that. Do geniuses mess up this multiplication? I think not, and that is a bit of a problem. After all, if these genius, or rather non-genius, AIs are too unreliable, we might ask them something and have no idea whether their solutions are correct or not. So, our question today is: can we improve the consistency of these AIs? Well, let's have a look at Google's new technique and put it to a real test. And I mean it. Yes, it is going to write a math test, and so much more, today, and we will see how it performs. Is it as smart as a human? Can that really be? We will find out together. And it turns out it has five amazing properties that I found really surprising. Here goes. One, it can solve math problems reliably, and sometimes even in surprising ways. Look, here it solved the problem in a way that is not only correct, but is different from the typical reference solution. And the surprises don't stop there. This one can even deal with questions that include drawings, which is stunning. Two, a proper AI today has to be relatively general, and this means that it shouldn't only be good at math; it should be good at other subjects too. And its solutions to these physics questions are not only correct, but clear, crisp and beautiful at the same time. I absolutely love it. So yes, it can do not only math, but physics too. But three, I hope you know what's coming. Oh yes, physics and math are not the only two subjects it can deal with. The generality does not stop there. This AI is a true polymath: we can ask it about excretory organs in biology, or even electrical engineering, chemistry and astronomy, and hold onto your papers, because it even knows some machine learning. Yes, a machine that learns about itself. How cool is that? What a time to be alive. Four, it significantly outperforms previous techniques, provided that we use the larger AI model. Here is a beautiful example where we can prompt it with a question and the smaller model fails. I wonder if the larger, more capable version of it would be able to get this right. Does it? Yes. So good. And five, here you see an example question from Poland's 2022 National Math Exam. And now we are finally at the point where we can ask the question: is it as smart as a human? Well, if you have been holding onto your papers so far, now squeeze that paper, because the answer is yes. Kind of. It not only solved this question correctly, but adding everything up, it scored above the national average. What's more, it can even solve some undergrad math problems straight from MIT. So, yes, at least on this particular test, it is at the very least as smart as a human. I am truly out of words. This means that the consistency of these solutions is finally at a point where we could use this as an AI science assistant. A junior assistant who messes up from time to time, mind you, but an assistant nonetheless. What a time to be alive.
Interestingly, if we look under the hood, we see that it works by generating many candidate solutions and starting a vote between these solutions. The ones with the most votes end up being the solutions that you've seen throughout this video. Now, not even this technique is perfect; you see some failure cases here. But once again, this is so much improvement just one paper down the line. I wonder what happens if we apply the First Law of Papers, which says that research is a process: do not look at where we are; look at where we will be two more papers down the line. So, what do you think these will be capable of one more paper down the line? Does this get your mind going? Let me know in the comments below. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
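The voting scheme described in the transcript above is simple enough to sketch directly. A minimal version follows; `generate_solution` is a hypothetical stand-in for sampling one worked solution from the language model and extracting its final answer, and the sampling budget `k` is a placeholder:

```python
from collections import Counter

def majority_vote(generate_solution, question: str, k: int = 32):
    """Sample k candidate solutions and let their final answers vote."""
    answers = [generate_solution(question) for _ in range(k)]
    # Different reasoning paths that land on the same final answer
    # reinforce each other; the most common answer wins the vote.
    (best, count), = Counter(answers).most_common(1)
    return best, count / k  # winning answer and its share of the vote
```

This is why consistency improves: a single sampled solution can go off the rails, but errors tend to scatter across many different wrong answers, while correct reasoning paths tend to agree.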
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, and this is Two Minute Papers with Dr. Karojone Fahir."}, {"start": 4.64, "end": 14.88, "text": " Today, we are going to have a look at Google's new AI and find out whether it is as smart as a human, at least in some test subjects."}, {"start": 14.88, "end": 29.560000000000002, "text": " Not so long ago, DeepMind published an AI technique and its knowledge in the area of humanities, social sciences and medicine was fantastic, but at mathematics of all things not so much."}, {"start": 29.56, "end": 39.16, "text": " Interestingly, it also thought that it is a genius and then, amusingly, it got quite confused by very simple questions."}, {"start": 39.16, "end": 40.839999999999996, "text": " Look at that."}, {"start": 40.839999999999996, "end": 44.32, "text": " Do geniuses mess up this multiplication?"}, {"start": 44.32, "end": 48.239999999999995, "text": " I think not, and that is a bit of a problem."}, {"start": 48.239999999999995, "end": 57.32, "text": " After all, if these geniuses or non-genius AI's are too unreliable, we ask them something and might have no idea"}, {"start": 57.32, "end": 60.36, "text": " whether their solutions are correct or not."}, {"start": 60.36, "end": 66.12, "text": " So, our question today is, can we improve the consistency of these AI's?"}, {"start": 66.12, "end": 71.2, "text": " Well, let's have a look at Google's new technique and put it to the real test."}, {"start": 71.2, "end": 72.64, "text": " And I mean it."}, {"start": 72.64, "end": 80.36, "text": " Yes, it is going to write a math test and so much more today and we will see how it performs."}, {"start": 80.36, "end": 82.76, "text": " Is it as smart as a human?"}, {"start": 82.76, "end": 84.24000000000001, "text": " Can that really be?"}, {"start": 84.24000000000001, "end": 86.56, "text": " We will find out today together."}, {"start": 86.56, "end": 92.92, "text": " And it turns out it has five amazing properties that I found really surprising."}, {"start": 92.92, "end": 93.88, "text": " Here goes."}, {"start": 93.88, "end": 101.72, "text": " One, it can solve math problems reliably and sometimes even in surprising ways."}, {"start": 101.72, "end": 110.04, "text": " Look, here it solved the problem in a way that is not only correct, but is different from the typical reference solution."}, {"start": 110.04, "end": 112.72, "text": " And the surprises don't stop there."}, {"start": 112.72, "end": 118.72, "text": " This one can even deal with questions that include drawings which is stunning."}, {"start": 118.72, "end": 126.03999999999999, "text": " Two, a proper AI today has to be relatively general and this means that it shouldn't only be good at math,"}, {"start": 126.03999999999999, "end": 128.92, "text": " it should be good at other subjects too."}, {"start": 128.92, "end": 138.12, "text": " And its solutions to these physics questions are not only correct, but clear, crisp and beautiful at the same time."}, {"start": 138.12, "end": 140.4, "text": " I absolutely love it."}, {"start": 140.4, "end": 145.72, "text": " So yes, it can do not only math, but physics too."}, {"start": 145.72, "end": 149.36, "text": " But three, I hope you know what's coming."}, {"start": 149.36, "end": 154.52, "text": " Oh yes, physics and math are not the only two subjects it can deal with."}, {"start": 154.52, "end": 157.12, "text": " The generality does not stop there."}, {"start": 157.12, "end": 163.32, "text": " This AI is a true polymath as we can ask it about 
excretory organs in biology"}, {"start": 163.32, "end": 173.44, "text": " or even electrical engineering, chemistry, astronomy and hold onto your papers because it even knows some machine learning."}, {"start": 173.44, "end": 177.2, "text": " Yes, a machine that learns about itself."}, {"start": 177.2, "end": 179.07999999999998, "text": " How cool is that?"}, {"start": 179.07999999999998, "end": 181.12, "text": " What a time to be alive."}, {"start": 181.12, "end": 188.76, "text": " Four, it's significantly outperforms previous techniques provided that we use the larger AI model."}, {"start": 188.76, "end": 195.6, "text": " Here is a beautiful example where we can prompt it with a question and the smaller model fails."}, {"start": 195.6, "end": 201.28, "text": " I wonder if the larger, more capable version of it would be able to get this right."}, {"start": 201.28, "end": 202.44, "text": " Does it?"}, {"start": 202.44, "end": 204.51999999999998, "text": " Yes, so good."}, {"start": 204.51999999999998, "end": 212.76, "text": " And five, here you see an example question from the 2022 Poland's National Math Exam."}, {"start": 212.76, "end": 219.28, "text": " And now we are finally at the point where we can ask the question, is it as smart as a human?"}, {"start": 219.28, "end": 226.67999999999998, "text": " Well, if you have been holding onto your paper so far, now squeeze that paper because the answer is yes."}, {"start": 226.67999999999998, "end": 227.56, "text": " Kind of."}, {"start": 227.56, "end": 235.56, "text": " It not only solved this question correctly, but adding everything up, it scored above the national average."}, {"start": 235.56, "end": 241.48, "text": " What's more, it can even solve some undergrad math problems straight from MIT."}, {"start": 241.48, "end": 248.6, "text": " So, yes, at least on this particular test, it is at the very least as smart as a human."}, {"start": 248.6, "end": 251.48, "text": " I am truly out of words."}, {"start": 251.48, "end": 259.48, "text": " This means that the consistency of these solutions is finally at a point where we could use this as an AI science assistant."}, {"start": 259.48, "end": 265.88, "text": " A junior assistant who messes up from time to time, mind you, but an assistant nonetheless."}, {"start": 265.88, "end": 268.08, "text": " What a time to be alive."}, {"start": 268.08, "end": 274.88, "text": " Interestingly, if we look under the hood, we see that it works by generating many candidate solutions"}, {"start": 274.88, "end": 278.47999999999996, "text": " and starts a vote between these solutions."}, {"start": 278.47999999999996, "end": 284.24, "text": " And the ones with the most votes end up being the solutions that you've seen throughout this video."}, {"start": 284.24, "end": 289.59999999999997, "text": " Now, not even this technique is perfect, you see some failure cases here."}, {"start": 289.59999999999997, "end": 295.44, "text": " But once again, this is so much improvement, just one paper down the line."}, {"start": 295.44, "end": 302.56, "text": " I wonder what happens if we apply the first law of papers which says that research is a process."}, {"start": 302.56, "end": 308.16, "text": " Do not look at where we are, look at where we will be two more papers down the line."}, {"start": 308.16, "end": 311.28, "text": " So, what do you think these will be capable of?"}, {"start": 311.28, "end": 313.12, "text": " One more paper down the line?"}, {"start": 313.12, "end": 314.88, "text": " Does this get your mind 
going?"}, {"start": 314.88, "end": 316.72, "text": " Let me know in the comments below."}, {"start": 316.72, "end": 320.15999999999997, "text": " This video has been supported by weights and biases."}, {"start": 320.16, "end": 326.72, "text": " Check out the recent offering, fully connected, a place where they bring machine learning practitioners together"}, {"start": 326.72, "end": 331.44, "text": " to share and discuss their ideas, learn from industry leaders,"}, {"start": 331.44, "end": 334.64000000000004, "text": " and even collaborate on projects together."}, {"start": 334.64000000000004, "end": 340.72, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the series,"}, {"start": 340.72, "end": 343.84000000000003, "text": " but don't really know where to start."}, {"start": 343.84000000000003, "end": 345.44000000000005, "text": " And here it is."}, {"start": 345.44, "end": 351.2, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 351.2, "end": 355.04, "text": " get your papers accepted to a conference, and more."}, {"start": 355.04, "end": 362.4, "text": " Make sure to visit them through wnb.me slash papers, or just click the link in the video description."}, {"start": 362.4, "end": 366.0, "text": " Our thanks to weights and biases for their longstanding support,"}, {"start": 366.0, "end": 368.8, "text": " and for helping us make better videos for you."}, {"start": 368.8, "end": 382.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6-FESfXHF5s
Microsoft's New AI: Virtual Humans Became Real! 🤯
❤️ Check out Runway and try it for free here: https://runwayml.com/papers/ 📝 The paper "3D Face Reconstruction with Dense Landmarks" is available here: https://microsoft.github.io/DenseLandmarks/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Chapters: 0:00 - Teaser 0:19 - Use virtual worlds! 0:39 Is that a good idea? 1:28 Does this really work? 1:51 Now 10 times more! 2:13 Previous method 2:35 New method 3:15 It gets better! 3:52 From simulation to reality 4:35 "Gloves" 5:07 How fast is it? 5:35 VS Apple's ARKit 6:25 Application to DeepFakes Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see Microsoft's AI looking at a lot of people who don't exist, and then we will see that these virtual people can teach it something about real people. Now, through the power of computer graphics algorithms, we are able to create virtual worlds, and of course, within these virtual worlds, virtual humans too. So, here is a wacky idea. If we have all this virtual data, why not use these instead of real photos to train a new AI to do useful things with them? Hmm, wait a second. Maybe this idea is not so wacky after all. Especially because we can generate as many of these virtual humans as we wish, and all this data is perfectly annotated. The location and shape of the eyebrows is known even when they are occluded, and we know the depth and geometry of every single hair strand of the beard. If done well, there will be no issues about the identity of the subject or the distribution of the data. Also, we are not limited by our wardrobe or the environments we have access to. In this virtual world, we can do anything we wish. So good, but of course, here is the ultimate question that decides the fate of this project. And that question is, does this work? And what is all this good for? And the crazy thing is that Microsoft's previous AI technique could now identify facial landmarks of real people, but it has never seen a real person before. How cool is that? But this is a previous work, and now a new paper has emerged, and in this one, scientists at Microsoft said, how about more than 10 times more landmarks? Yes, this new paper promises no less than 700. When I saw this, I thought, are you kidding? Are we going 10x just one more paper down the line? Well, I will believe it when I see it. Let's see a different previous technique from just two years ago. You see that we have temporal consistency issues. In other words, there is plenty of flickering going on here, and there is one more problem. These facial expressions are giving it a hard time. Can we really expect any improvement over these two years? Well, hold on to your papers, and let's have a look at the new method, and see for ourselves. Look at that! It not only tracks a ton more landmarks, but the consistency of the results has improved a ton as well. So, it both solves a harder problem, and it also does it better than the previous technique. Wow! And all this just one more paper down the line. My goodness, I love it. I feel like this new method is the first that could even track Jim Carrey himself. And we are not done yet, not even close. It gets even better. I was wondering if it still works in the presence of occlusions, for instance, whenever the face is covered by hair or clothing, or a flower. And let's see. It still works amazingly well. What about the colors? That is the other really cool thing. It can, for instance, tell us how confident it is in these predictions. Red means that the AI has to do more guesswork, often because of these occlusions. My other favorite thing is that this is still trained with synthetic data. In fact, it is key to its success. This is one of those success stories, where training an AI in a simulated world can be brought into the real world, and it still works spectacularly. There are a lot of factors at play here, so let's send a huge thank you to computer graphics researchers as well for making this happen.
These virtual characters could not be rendered and animated in real time, without decades of incredible graphics research work. Thank you. And now comes the ultimate question. How long do we have to wait for these results? This is incredibly important. Why? Well, here is a previous technique that was amazing at tracking our hand movements. Do you see these gloves? Yes, those are not gloves. And this is how a previous method understands our hand motions, which is to say it can reconstruct them nearly perfectly. Stunning work. However, these are typically used in virtual worlds, and we had to wait for nearly an hour for such reconstruction to happen. Do we have the same situation here? You know, 10 times better results in facial landmark detection? So, what is the price that we have to pay for this? One hour of waiting again? Well, not at all. If you have been holding onto your papers so far, now squeeze that paper, because it is not only real time, it is more than twice as fast as real time. It can churn out 150 frames per second, and it doesn't even require your graphics card, it runs on your processor. That is incredible. Here is one more comparison against the competitors. For instance, Apple's ARKit runs on their own iPhones, so it can make use of the additional depth information. That is a gold mine of information. But this new technique doesn't, it just takes color data, that is so much harder. But in return, it will run on any phone. Can these results compete with Apple's solution with less data? Let's have a look. My goodness, I love it. The results seem at the very least comparably good. That is, once again, amazing progress in just one paper. Also, what I am really excited about is that variants of this technique may also be able to improve the fidelity of these deepfakes out there. For instance, here is an example of me becoming a bunch of characters from Game of Thrones. This previous work was incredible, because it could even track where I was looking. Imagine a new generation of these tools that is able to track even more facial landmarks and democratize creating movies, games, and all kinds of virtual worlds. Yes, with some of these techniques, we can even become a painting or a virtual character as well, and even the movement of our nostrils would be transferred. What a time to be alive! So, does this get your mind going? What would you use this for? Let me know in the comments below. This episode has been supported by Runway, professional and magical AI video editing for everyone. I often hear you fellow scholars asking, OK, these AI techniques look great, but when do I get to use them? And the answer is, right now, Runway is an amazing video editor that can do many of the things that you see here in this series. For instance, it can automatically replace the background behind the person. It can do inpainting for videos amazingly well. It can keep track of objects as they move, and it also has beat detection, automatic subtitles, noise removal, you name it. No wonder it is used by editors, post-production teams, and creators at companies like CBS, Google, Vox, and many others. Make sure to go to RunwayML.com, slash papers, sign up, and try it for free today. Thanks for watching and for your generous support, and I'll see you next time.
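The confidence colors described above suggest a network that outputs, for every landmark, a position plus its own uncertainty. A common way to train such a head is a Gaussian negative log-likelihood, sketched below; this is an illustrative formulation under that assumption, not Microsoft's actual code, and the tensor shapes are invented for the example.

import torch

def landmark_nll_loss(pred_xy, pred_log_sigma, target_xy):
    # pred_xy:        (B, L, 2) predicted 2D landmark positions
    # pred_log_sigma: (B, L)    per-landmark log uncertainty
    # target_xy:      (B, L, 2) ground truth -- known exactly for
    #                           synthetic faces, even under occlusion
    sq_err = ((pred_xy - target_xy) ** 2).sum(dim=-1)   # (B, L)
    inv_var = torch.exp(-2.0 * pred_log_sigma)          # 1 / sigma^2
    # A large sigma shrinks the error term but pays a log penalty, so the
    # network is rewarded for honest uncertainty (the "red" landmarks).
    return (0.5 * sq_err * inv_var + 2.0 * pred_log_sigma).mean()

At inference time, the predicted sigma can be mapped directly to the red-to-green coloring shown in the video.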
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.76, "end": 11.28, "text": " Today, we are going to see Microsoft's AI looking at a lot of people who don't exist,"}, {"start": 11.28, "end": 17.84, "text": " and then we will see that these virtual people can teach it something about real people."}, {"start": 17.84, "end": 23.84, "text": " Now, through the power of computer graphics algorithms, we are able to create virtual worlds,"}, {"start": 23.84, "end": 28.32, "text": " and of course, within these virtual worlds, virtual humans too."}, {"start": 28.32, "end": 31.12, "text": " So, here is a wacky idea."}, {"start": 31.12, "end": 36.32, "text": " If we have all this virtual data, why not use these instead of real photos"}, {"start": 36.32, "end": 40.32, "text": " to train a new AI to do useful things with them?"}, {"start": 40.32, "end": 42.88, "text": " Hmm, wait a second."}, {"start": 42.88, "end": 46.879999999999995, "text": " Maybe this idea is not so wacky after all."}, {"start": 46.879999999999995, "end": 52.0, "text": " Especially because we can generate as many of these virtual humans as we wish,"}, {"start": 52.0, "end": 55.760000000000005, "text": " and all this data is perfectly annotated."}, {"start": 55.76, "end": 61.28, "text": " The location and shape of the eyebrows is known even when they are occluded,"}, {"start": 61.28, "end": 66.72, "text": " and we know the depth and geometry of every single hair strand of the beard."}, {"start": 66.72, "end": 71.44, "text": " If done well, there will be no issues about the identity of the subject"}, {"start": 71.44, "end": 74.0, "text": " or the distribution of the data."}, {"start": 74.0, "end": 79.28, "text": " Also, we are not limited by our wardrobe or the environments we have access to."}, {"start": 79.28, "end": 83.2, "text": " In this virtual world, we can do anything we wish."}, {"start": 83.2, "end": 89.36, "text": " So good, but of course, here is the ultimate question that decides the fate of this project."}, {"start": 89.36, "end": 92.4, "text": " And that question is, does this work?"}, {"start": 92.4, "end": 95.2, "text": " And what is all this good for?"}, {"start": 95.2, "end": 100.72, "text": " And the crazy thing is that Microsoft's previous AI technique could now identify"}, {"start": 100.72, "end": 106.56, "text": " facial landmarks of real people, but it has never seen a real person before."}, {"start": 107.28, "end": 109.28, "text": " How cool is that?"}, {"start": 109.28, "end": 114.32000000000001, "text": " But this is a previous work, and now a new paper has emerged,"}, {"start": 114.32000000000001, "end": 117.76, "text": " and in this one, scientists at Microsoft said,"}, {"start": 117.76, "end": 121.52, "text": " how about more than 10 times more landmarks?"}, {"start": 122.24000000000001, "end": 127.28, "text": " Yes, this new paper promises no less than 700."}, {"start": 127.28, "end": 130.32, "text": " When I saw this, I thought, are you kidding?"}, {"start": 130.32, "end": 134.4, "text": " Are we going 10x just one more paper down the line?"}, {"start": 134.4, "end": 137.12, "text": " Well, I will believe it when I see it."}, {"start": 137.12, "end": 141.20000000000002, "text": " Let's see a different previous technique from just two years ago."}, {"start": 142.0, "end": 145.68, "text": " You see that we have temporal consistency issues."}, {"start": 145.68, "end": 149.36, "text": " In other words, there is plenty of 
flickering going on here,"}, {"start": 149.36, "end": 151.20000000000002, "text": " and there is one more problem."}, {"start": 151.84, "end": 154.96, "text": " These facial expressions are giving it a hard time."}, {"start": 155.68, "end": 159.28, "text": " Can we really expect any improvement over these two years?"}, {"start": 159.84, "end": 164.16, "text": " Well, hold on to your papers, and let's have a look at the new method,"}, {"start": 164.16, "end": 165.68, "text": " and see for ourselves."}, {"start": 165.68, "end": 167.68, "text": " Look at that!"}, {"start": 167.68, "end": 170.72, "text": " It not only tracks a ton more landmarks,"}, {"start": 170.72, "end": 174.64000000000001, "text": " but the consistency of the results has improved a ton as well."}, {"start": 174.64000000000001, "end": 178.56, "text": " So, it both solves a harder problem,"}, {"start": 178.56, "end": 181.68, "text": " and it also does it better than the previous technique."}, {"start": 181.68, "end": 182.64000000000001, "text": " Wow!"}, {"start": 182.64000000000001, "end": 186.56, "text": " And all this just one more paper down the line."}, {"start": 186.56, "end": 189.84, "text": " My goodness, I love it."}, {"start": 189.84, "end": 195.44, "text": " I feel like this new method is the first that could even track Jim Carey himself."}, {"start": 195.44, "end": 199.28, "text": " And we are not done yet, not even close."}, {"start": 199.28, "end": 201.2, "text": " It gets even better."}, {"start": 201.2, "end": 205.44, "text": " I was wondering if it still works in the presence of occlusions,"}, {"start": 205.44, "end": 211.28, "text": " for instance, whenever the face is covered by hair or clothing, or a flower."}, {"start": 211.28, "end": 213.68, "text": " And let's see."}, {"start": 214.88, "end": 216.96, "text": " It still works amazingly well."}, {"start": 216.96, "end": 218.64, "text": " What about the colors?"}, {"start": 218.64, "end": 221.76, "text": " That is the other really cool thing."}, {"start": 221.76, "end": 226.79999999999998, "text": " It can, for instance, tell us how confident it is in these predictions."}, {"start": 226.79999999999998, "end": 230.32, "text": " Red means that the AI has to do more guesswork,"}, {"start": 230.32, "end": 232.79999999999998, "text": " often because of these occlusions."}, {"start": 232.79999999999998, "end": 237.6, "text": " My other favorite thing is that this is still trained with synthetic data."}, {"start": 238.23999999999998, "end": 241.04, "text": " In fact, it is key to its success."}, {"start": 241.04, "end": 243.2, "text": " This is one of those success stories,"}, {"start": 243.2, "end": 248.16, "text": " where training an AI in a simulated world can be brought into the real world,"}, {"start": 248.16, "end": 251.12, "text": " and it still works spectacularly."}, {"start": 251.12, "end": 253.68, "text": " There is a lot of factors that play here,"}, {"start": 253.68, "end": 260.4, "text": " so let's send a huge thank you to Computer Graphics Researchers as well for making this happen."}, {"start": 260.4, "end": 264.88, "text": " These virtual characters could not be rendered and animated in real time,"}, {"start": 264.88, "end": 268.24, "text": " without decades of incredible graphics research works."}, {"start": 268.96, "end": 269.36, "text": " Thank you."}, {"start": 270.08, "end": 272.8, "text": " And now comes the ultimate question."}, {"start": 273.12, "end": 275.84000000000003, "text": " How much do we have to wait for these results?"}, 
{"start": 275.84000000000003, "end": 277.6, "text": " This is incredibly important."}, {"start": 278.32, "end": 279.2, "text": " Why?"}, {"start": 279.2, "end": 284.32, "text": " Well, here is a previous technique that was amazing at tracking our hand movements."}, {"start": 285.03999999999996, "end": 286.15999999999997, "text": " Do you see these gloves?"}, {"start": 286.8, "end": 289.76, "text": " Yes, those are nut gloves."}, {"start": 289.76, "end": 293.59999999999997, "text": " And this is how a previous method understands our hand motions,"}, {"start": 293.59999999999997, "end": 297.36, "text": " which is to say it can reconstruct them nearly perfectly."}, {"start": 298.15999999999997, "end": 298.96, "text": " Standing work."}, {"start": 299.52, "end": 303.03999999999996, "text": " However, these are typically used in virtual worlds,"}, {"start": 303.03999999999996, "end": 307.44, "text": " and we had to wait for nearly an hour for such reconstruction to happen."}, {"start": 307.44, "end": 310.16, "text": " Do we have the same situation here?"}, {"start": 310.16, "end": 314.4, "text": " You know, 10 times better results in facial landmark detection?"}, {"start": 314.4, "end": 317.68, "text": " So, what is the price that we have to pay for this?"}, {"start": 317.68, "end": 319.68, "text": " One hour of waiting again?"}, {"start": 319.68, "end": 321.36, "text": " Well, not at all."}, {"start": 321.36, "end": 324.0, "text": " If you have been holding onto your paper so far,"}, {"start": 324.0, "end": 326.08, "text": " now squeeze that paper,"}, {"start": 326.08, "end": 328.48, "text": " because it is not only real time,"}, {"start": 328.48, "end": 332.32, "text": " it is more than twice as fast as real time."}, {"start": 332.32, "end": 336.24, "text": " It can turn out 150 frames per second,"}, {"start": 336.24, "end": 339.84000000000003, "text": " and it doesn't even require your graphics card,"}, {"start": 339.84000000000003, "end": 341.6, "text": " it runs on your processor."}, {"start": 342.24, "end": 343.6, "text": " That is incredible."}, {"start": 344.24, "end": 347.04, "text": " Here is one more comparison against the competitors."}, {"start": 347.76, "end": 352.08, "text": " For instance, Apple's AR Kit runs on their own iPhones,"}, {"start": 352.08, "end": 355.2, "text": " they can make use of the additional depth information."}, {"start": 355.68, "end": 358.24, "text": " That is a gold mine of information."}, {"start": 358.24, "end": 362.48, "text": " But this new technique doesn't, it just takes color data,"}, {"start": 362.96000000000004, "end": 364.96000000000004, "text": " that is so much harder."}, {"start": 364.96, "end": 368.56, "text": " But in return, it will run on any phone."}, {"start": 368.56, "end": 372.47999999999996, "text": " Can these results compete with Apple's solution with less data?"}, {"start": 373.12, "end": 373.91999999999996, "text": " Let's have a look."}, {"start": 375.12, "end": 377.84, "text": " My goodness, I love it."}, {"start": 377.84, "end": 381.44, "text": " The results seem at the very least comparably good."}, {"start": 381.91999999999996, "end": 386.32, "text": " That is, once again, amazing progress in just one paper."}, {"start": 386.96, "end": 391.28, "text": " Also, what I am really excited about is that variants of this technique"}, {"start": 391.28, "end": 395.76, "text": " may also be able to improve the fidelity of these deepfakes out there."}, {"start": 395.76, "end": 400.15999999999997, "text": " For instance, here is an 
example of me becoming a bunch of characters"}, {"start": 400.15999999999997, "end": 401.52, "text": " from Game of Thrones."}, {"start": 401.52, "end": 404.15999999999997, "text": " This previous work was incredible,"}, {"start": 404.15999999999997, "end": 407.84, "text": " because it could even track where I was looking."}, {"start": 407.84, "end": 410.47999999999996, "text": " Imagine a new generation of these tools"}, {"start": 410.47999999999996, "end": 413.59999999999997, "text": " that is able to track even more facial landmarks"}, {"start": 413.59999999999997, "end": 419.11999999999995, "text": " and democratize creating movies, games, and all kinds of virtual worlds."}, {"start": 419.12, "end": 423.68, "text": " Yes, with some of these techniques, we can even become a painting"}, {"start": 423.68, "end": 425.92, "text": " or a virtual character as well,"}, {"start": 425.92, "end": 429.52, "text": " and even the movement of our nostrils would be transferred."}, {"start": 429.52, "end": 431.36, "text": " What a time to be alive!"}, {"start": 431.36, "end": 433.84000000000003, "text": " So, does this get your mind going?"}, {"start": 433.84000000000003, "end": 435.84000000000003, "text": " What would you use this for?"}, {"start": 435.84000000000003, "end": 437.68, "text": " Let me know in the comments below."}, {"start": 437.68, "end": 440.8, "text": " This episode has been supported by Ranway,"}, {"start": 440.8, "end": 445.36, "text": " Professional and Magical AI Video Editing for everyone."}, {"start": 445.36, "end": 448.24, "text": " I often hear you follow scholars asking,"}, {"start": 448.24, "end": 451.6, "text": " OK, these AI techniques look great,"}, {"start": 451.6, "end": 454.08, "text": " but when do I get to use them?"}, {"start": 454.08, "end": 456.48, "text": " And the answer is, right now,"}, {"start": 456.48, "end": 458.88, "text": " Ranway is an amazing video editor"}, {"start": 458.88, "end": 463.28000000000003, "text": " that can do many of the things that you see here in this series."}, {"start": 463.28000000000003, "end": 468.40000000000003, "text": " For instance, it can automatically replace the background behind the person."}, {"start": 468.40000000000003, "end": 472.64, "text": " It can do in painting for videos amazingly well."}, {"start": 472.64, "end": 475.28000000000003, "text": " It can keep track of objects as they move,"}, {"start": 475.28, "end": 478.0, "text": " and it also has B-Detection,"}, {"start": 478.0, "end": 482.15999999999997, "text": " Automatic subtitles, Noise Removal, You Name It."}, {"start": 482.15999999999997, "end": 486.0, "text": " No wonder it is used by editors, post-production teams,"}, {"start": 486.0, "end": 491.84, "text": " and creators at companies like CBS, Google, Vox, and many other."}, {"start": 491.84, "end": 494.79999999999995, "text": " Make sure to go to RanwayML.com,"}, {"start": 494.79999999999995, "end": 498.96, "text": " slash papers, sign up, and try it for free today."}, {"start": 498.96, "end": 501.2, "text": " Thanks for watching and for your generous support,"}, {"start": 501.2, "end": 505.92, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7iy0WJwNmv4
Google’s New AI Learned To See In The Dark! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images" is available here: https://bmild.github.io/rawnerf/index.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a collection of photos like these and magically create a video where we can fly through these photos. And we are going to do all this with a twist. So, how is this even possible? Especially since the input is only a handful of photos. Well, typically we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. Of course, that sounds impossible. Admittedly, some information is given about the scene, but this is really not much. And as you see, this is not impossible at all. Through the power of learning-based techniques, this previous AI is already capable of pulling off this amazing trick. And today, I am going to show you something even more incredible. Did you notice that most of these were shot during the daytime and these are all well-lit images? Every single one of them. So our question today is, can we perform view synthesis in the dark? And my initial answer would be a resounding no. Why? Well, in this case, we not only have to deal with less detail in the images, it would also be very difficult to stitch new views together if we have images like this one. Luckily, we have a choice. Instead, we can try something else and that is using raw sensor data instead. It looks like this. We get more detail, but uh-oh. Now we also have a problem. Do you see the problem here? Yes, that's right. In the raw sensor data, we have more detail, but also much more noise that contaminates this data too. So we either have to choose from less detail and less noise or from more detail and more noise. So I guess that means that we get no view synthesis in the dark, right? Well, don't despair, not everything is lost yet. There are image denoising techniques that we can reach out to. Let's see if this gets any better. Hmm, it definitely got a lot better, but I have to be honest. This is not even close to the quality we need for view synthesis. And note that this one denoises a single image. A single image? Yes, finally, there is an opening here. Remember, in the world of NeRFs, we are not using a single image, we are using a package of images. A package contains much more information than just one image and hopefully it can be denoised better. So, this is what the previous method could do and now hold onto your papers and let's look at this new technique called RawNeRF. Can it pull this off? Wow, seemingly it can. So now can we be greedy and hope that view synthesis works on this data? Let's see. My goodness, it really does. And we are not done yet, in fact we are just getting started. It can do even more. For instance, it can perform tone mapping on the underlying data to bring out even more detail from these dark images and here comes my favourite. Oh yes, we can also refocus these images and these highly sought-after depth-of-field effects will start to appear. I love it. And what I love even more is that we can even play with this in real time to refocus the scene. This is a very impressive set of features, so let's take it out for a spin and marvel together at five amazing examples of what it can do. One, yes, once again this is extremely noisy. For instance, can you read this street sign? Not a chance, right? And what about now? This looks like magic. I love it. Now let's start the view synthesis part and this looks really good given the noisy inputs.
The previous, original NeRF technique could only produce this and this is not some ancient technique. Uh-uh. No sir, this is from just two years ago and today a couple of papers down the line and we get this. I can't believe it. We can even see the specular highlight moving around the edge of the car here. Outstanding. Two, actually let's have a closer look at specular highlights. Here is a noisy image, the denoised version and the view synthesis, and the specular highlights are once again excellent. These are very difficult to capture because they change a great deal as we move the camera around and the photos are spaced out relatively far from each other. This means a huge challenge for the learning algorithm and as you see this one passes with flying colors. Three, thin structures are always a problem. Look, an otherwise excellent previous technique had a great deal of trouble with the fence here even in a well-lit scene. So let's see... are you kidding me? Doing the same with a bunch of nighttime photos? There is not a chance that this will work. So let's see together. Look at that. I am out of words. Or you know what's even better, let's be really picky and look here instead, these areas are even more challenging and even these work really well. Such improvement in so little time. Four, as I am a light transport researcher by trade, I would love to look at how it resolves some more challenging specular highlights. For instance, you can see how the road reflects the streetlights here and the result looks not just passable. This looks flat out gorgeous. Now talking about gorgeous scenes, let's look at some more of those. Five, putting it all together. This will be a stress test for the new technique. Let's change the viewpoint, refocus the scene and play with the exposure at the same time. That is incredible. What a time to be alive. And you are saying that it does all this from a collection of 25 to 200 photos. Today we can shoot these in seconds. Clearly, not even this technique is perfect, we can see that this does not match reality exactly, but going from a set of extremely noisy raw images to this is truly a sight to behold. The previous two-year-old technique couldn't even get close to these results. Bravo! And this is an excellent place for us to apply the first law of papers which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, what do you think? What will we be able to do, two more papers down the line? And what would you use this for? Let me know in the comments below. Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their Artifacts feature, which speeds up the most common machine learning steps like uploading raw data, splitting it into training, validation and test sets, and of course, the best part: starting to train a neural network. It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
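Why does training on noisy raw data work at all? Each scene point is observed in many photos, so zero-mean sensor noise averages out during optimization, provided the loss does not ignore dark pixels. The RawNeRF paper handles this with a weighted L2 loss in linear raw space that approximates an error in tone-mapped space; below is a minimal PyTorch-style sketch of such a loss, with the rendering model itself assumed and not shown.

import torch

def rawnerf_style_loss(rendered, observed, eps=1e-3):
    # rendered: linear HDR values predicted along a batch of rays
    # observed: the noisy raw pixel values for the same rays
    # The weight is a stopped-gradient function of the prediction: it only
    # rescales the error, so dark regions still produce useful gradients,
    # roughly like computing the loss on tone-mapped values.
    weight = 1.0 / (rendered.detach().clamp(min=0.0) + eps)
    return ((weight * (rendered - observed)) ** 2).mean()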
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Ejona Ifeher."}, {"start": 4.76, "end": 12.620000000000001, "text": " Today, we are going to take a collection of photos like these and magically create a video"}, {"start": 12.620000000000001, "end": 15.84, "text": " where we can fly through these photos."}, {"start": 15.84, "end": 20.16, "text": " And we are going to do all this with a twist."}, {"start": 20.16, "end": 23.1, "text": " So, how is this even possible?"}, {"start": 23.1, "end": 27.84, "text": " Especially that the input is only a handful of photos."}, {"start": 27.84, "end": 34.980000000000004, "text": " Well, typically we give it to a learning algorithm and ask it to synthesize a photorealistic"}, {"start": 34.980000000000004, "end": 39.480000000000004, "text": " video where we fly through the scene as we please."}, {"start": 39.480000000000004, "end": 42.16, "text": " Of course, that sounds impassable."}, {"start": 42.16, "end": 49.0, "text": " Especially that some information is given about the scene, but this is really not much."}, {"start": 49.0, "end": 52.96, "text": " And as you see, this is not impassable at all."}, {"start": 52.96, "end": 58.8, "text": " For the power of learning-based techniques, this previous AI is already capable of pulling"}, {"start": 58.8, "end": 61.2, "text": " off this amazing trick."}, {"start": 61.2, "end": 65.88, "text": " And today, I am going to show you something even more incredible."}, {"start": 65.88, "end": 71.72, "text": " Did you notice that most of these were shot during the daytime and these are all well-lit"}, {"start": 71.72, "end": 73.24000000000001, "text": " images?"}, {"start": 73.24000000000001, "end": 75.08, "text": " Every single one of them."}, {"start": 75.08, "end": 81.64, "text": " So our question today is, can we perform view synthesis in the dark?"}, {"start": 81.64, "end": 85.4, "text": " And my initial answer would be a resounding no."}, {"start": 85.4, "end": 86.4, "text": " Why?"}, {"start": 86.4, "end": 92.72, "text": " Well, in this case, we not only have to deal with less detail in the images, it would also"}, {"start": 92.72, "end": 98.68, "text": " be very difficult to stitch new views together if we have images like this one."}, {"start": 98.68, "end": 101.44, "text": " Luckily, we have a choice."}, {"start": 101.44, "end": 108.56, "text": " Instead, we can try something else and that is using raw sensor data instead."}, {"start": 108.56, "end": 110.4, "text": " It looks like this."}, {"start": 110.4, "end": 113.56, "text": " We get more detail, but uh-oh."}, {"start": 113.56, "end": 116.32000000000001, "text": " Now we also have a problem."}, {"start": 116.32000000000001, "end": 118.12, "text": " Do you see the problem here?"}, {"start": 118.12, "end": 120.24000000000001, "text": " Yes, that's right."}, {"start": 120.24000000000001, "end": 126.36000000000001, "text": " In the raw sensor data, we have more detail, but also much more noise that contaminates"}, {"start": 126.36000000000001, "end": 128.16, "text": " this data too."}, {"start": 128.16, "end": 135.44, "text": " So we either have to choose from less detail and less noise or from more detail more noise."}, {"start": 135.44, "end": 141.16, "text": " So I guess that means that we get no view synthesis in the dark, right?"}, {"start": 141.16, "end": 145.32, "text": " Well, don't despair, not everything is lost yet."}, {"start": 145.32, "end": 149.96, "text": " There are image denoising techniques 
that we can reach out to."}, {"start": 149.96, "end": 153.4, "text": " Let's see if this gets any better."}, {"start": 153.4, "end": 158.68, "text": " Hmm, it definitely got a lot better, but I have to be honest."}, {"start": 158.68, "end": 164.16, "text": " This is not even close to the quality we need for view synthesis."}, {"start": 164.16, "end": 168.76, "text": " And note that this one denoises a single image."}, {"start": 168.76, "end": 170.12, "text": " A single image?"}, {"start": 170.12, "end": 173.8, "text": " Yes, finally, there is an opening here."}, {"start": 173.8, "end": 179.68, "text": " Remember, in the world of nerves, we are not using a single image, we are using a package"}, {"start": 179.68, "end": 180.68, "text": " of images."}, {"start": 180.68, "end": 187.64, "text": " A package contains much more information than just one image and hopefully it can be denoised"}, {"start": 187.64, "end": 188.64, "text": " better."}, {"start": 188.64, "end": 194.95999999999998, "text": " So, this is what the previous method could do and now hold onto your papers and let's"}, {"start": 194.95999999999998, "end": 198.64, "text": " look at this new technique called Ronerf."}, {"start": 198.64, "end": 200.27999999999997, "text": " Can it pull this off?"}, {"start": 200.27999999999997, "end": 203.88, "text": " Wow, seemingly it can."}, {"start": 203.88, "end": 210.0, "text": " So now can we be greedy and hope that view synthesis works on this data?"}, {"start": 210.0, "end": 211.64, "text": " Let's see."}, {"start": 211.64, "end": 214.48, "text": " My goodness, it really does."}, {"start": 214.48, "end": 219.16, "text": " And we are not done yet, in fact we are just getting started."}, {"start": 219.16, "end": 221.2, "text": " It can do even more."}, {"start": 221.2, "end": 226.92, "text": " For instance, it can perform tone mapping on the underlying data to bring out even more"}, {"start": 226.92, "end": 231.48, "text": " detail from these dark images and here comes my favourite."}, {"start": 231.48, "end": 238.56, "text": " Oh yes, we can also refocus these images and this highly sought after depth of field effects"}, {"start": 238.56, "end": 240.16, "text": " will start to appear."}, {"start": 240.16, "end": 242.39999999999998, "text": " I love it."}, {"start": 242.4, "end": 247.92000000000002, "text": " And what I love even more is that we can even play with this in real time to refocus the"}, {"start": 247.92000000000002, "end": 249.16, "text": " scene."}, {"start": 249.16, "end": 254.72, "text": " This is a very impressive set of features, so let's take it out for a spin and marvel"}, {"start": 254.72, "end": 259.0, "text": " together at five amazing examples of what it can do."}, {"start": 259.0, "end": 263.04, "text": " Yes, once again this is extremely noisy."}, {"start": 263.04, "end": 266.28000000000003, "text": " For instance, can you read this street sign?"}, {"start": 266.28000000000003, "end": 268.24, "text": " Not a chance, right?"}, {"start": 268.24, "end": 270.64, "text": " And what about now?"}, {"start": 270.64, "end": 272.64, "text": " This looks like magic."}, {"start": 272.64, "end": 274.15999999999997, "text": " I love it."}, {"start": 274.15999999999997, "end": 280.88, "text": " Now let's start the view synthesis part and this looks really good given the noisy inputs."}, {"start": 280.88, "end": 287.44, "text": " The previous original nerve technique could only produce this and this is not some ancient"}, {"start": 287.44, "end": 288.44, "text": " technique."}, 
{"start": 288.44, "end": 289.44, "text": " Uh-uh."}, {"start": 289.44, "end": 295.88, "text": " No sir, this is from just two years ago and today a couple of papers down the line and"}, {"start": 295.88, "end": 297.68, "text": " we get this."}, {"start": 297.68, "end": 299.68, "text": " I can't believe it."}, {"start": 299.68, "end": 305.6, "text": " We can even see the specular highlight moving around the edge of the car here."}, {"start": 305.6, "end": 306.6, "text": " Outstanding."}, {"start": 306.6, "end": 311.40000000000003, "text": " Two, actually let's have a closer look at specular highlights."}, {"start": 311.40000000000003, "end": 318.52, "text": " Here is a noisy image, the denoise version and the view synthesis and the specular highlights"}, {"start": 318.52, "end": 321.8, "text": " are once again excellent."}, {"start": 321.8, "end": 327.4, "text": " These are very difficult to capture because they change a great deal as we move the camera"}, {"start": 327.4, "end": 333.2, "text": " around and the photos are spaced out relatively far from each other."}, {"start": 333.2, "end": 338.52, "text": " This means a huge challenge for the learning algorithm and as you see this one passes with"}, {"start": 338.52, "end": 340.15999999999997, "text": " flying colors."}, {"start": 340.15999999999997, "end": 343.79999999999995, "text": " Three thin structures are always a problem."}, {"start": 343.79999999999995, "end": 349.52, "text": " Look, an otherwise excellent previous technique had a great deal of trouble with the fence"}, {"start": 349.52, "end": 352.88, "text": " here even in a well-lit scene."}, {"start": 352.88, "end": 359.6, "text": " So let's see, are you kidding me? doing the same with a bunch of nighttime photos there"}, {"start": 359.6, "end": 362.15999999999997, "text": " is not a chance that this will work."}, {"start": 362.15999999999997, "end": 364.92, "text": " So let's see together."}, {"start": 364.92, "end": 366.4, "text": " Look at that."}, {"start": 366.4, "end": 368.8, "text": " I am out of words."}, {"start": 368.8, "end": 374.4, "text": " Or you know what's even better, let's be really picky and look here instead, these"}, {"start": 374.4, "end": 379.88, "text": " areas are even more challenging and even these work really well."}, {"start": 379.88, "end": 383.15999999999997, "text": " Such improvement in so little time."}, {"start": 383.15999999999997, "end": 388.48, "text": " For as I am a light transport researcher by trade, I would love to look at it, resolve"}, {"start": 388.48, "end": 391.71999999999997, "text": " some more challenging specular highlights."}, {"start": 391.71999999999997, "end": 397.2, "text": " For instance, you can see how the road reflects the streetlights here and the result looks"}, {"start": 397.2, "end": 399.24, "text": " not just passable."}, {"start": 399.24, "end": 401.71999999999997, "text": " This looks flat out gorgeous."}, {"start": 401.71999999999997, "end": 406.36, "text": " Now talking about gorgeous scenes, let's look at some more of those."}, {"start": 406.36, "end": 409.0, "text": " Five putting it all together."}, {"start": 409.0, "end": 412.12, "text": " This will be a stress test for the new technique."}, {"start": 412.12, "end": 419.76, "text": " Let's change the viewpoint, refocus the scene and play with the exposure at the same time."}, {"start": 419.76, "end": 421.76, "text": " That is incredible."}, {"start": 421.76, "end": 423.96, "text": " What a time to be alive."}, {"start": 423.96, "end": 430.84, "text": " 
And you are saying that it does all this from a collection of 25 to 200 photos."}, {"start": 430.84, "end": 433.6, "text": " Today we can shoot these in seconds."}, {"start": 433.6, "end": 439.16, "text": " Clearly, not even this technique is perfect, we can see that this does not match reality"}, {"start": 439.16, "end": 446.6, "text": " exactly, but going from a set of extremely noisy raw images to this is truly a sight"}, {"start": 446.6, "end": 447.76000000000005, "text": " to behold."}, {"start": 447.76000000000005, "end": 452.64000000000004, "text": " The previous two-year-old technique couldn't even get close to these results."}, {"start": 452.64000000000004, "end": 453.64000000000004, "text": " Bravo!"}, {"start": 453.64000000000004, "end": 460.32000000000005, "text": " And this is an excellent place for us to apply the first law of papers which says that research"}, {"start": 460.32000000000005, "end": 461.32000000000005, "text": " is a process."}, {"start": 461.32, "end": 467.08, "text": " Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 467.08, "end": 469.08, "text": " So, what do you think?"}, {"start": 469.08, "end": 473.24, "text": " What will we be able to do, two more papers down the line?"}, {"start": 473.24, "end": 475.56, "text": " And what would you use this for?"}, {"start": 475.56, "end": 477.32, "text": " Let me know in the comments below."}, {"start": 477.32, "end": 482.76, "text": " Waits and biases provides tools to track your experiments in your deep learning projects."}, {"start": 482.76, "end": 488.03999999999996, "text": " What you see here is their artifacts feature which speeds up the most common machine learning"}, {"start": 488.04, "end": 495.48, "text": " steps like uploading raw data, splitting it into training, validation and test sets, and"}, {"start": 495.48, "end": 500.28000000000003, "text": " of course, the best part starting to train a neural network."}, {"start": 500.28000000000003, "end": 507.24, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more."}, {"start": 507.24, "end": 513.32, "text": " And the best part is that waits and biases is free for all individuals, academics and"}, {"start": 513.32, "end": 515.04, "text": " open source projects."}, {"start": 515.04, "end": 521.24, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 521.24, "end": 524.9599999999999, "text": " description and you can get a free demo today."}, {"start": 524.9599999999999, "end": 530.56, "text": " Our thanks to waits and biases for their long-standing support and for helping us make better"}, {"start": 530.56, "end": 531.56, "text": " videos for you."}, {"start": 531.56, "end": 561.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JkUF40kPV4M
Samsung’s AI: Megapixel DeepFakes! 📷
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "MegaPortraits: One-shot Megapixel Neural Head Avatars" is available here: https://samsunglabs.github.io/MegaPortraits/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to make the Mona Lisa and other fictional characters come to life through this deep fake technology that is able to create more detailed videos than previous techniques. So what are deep fakes? Simple, we record a video of ourselves as you see me over here doing just that, and with this we can make a different person speak. What you see here is my favorite deep fake application where I became these movie characters and tried to make them come alive. Ideally, expressions, eye movements and other gestures are also transferred. Some of them even work for animation movie characters too. Absolutely lovely. This can and will make filmmaking much more accessible for all of us, which I absolutely love. Also, when it comes to real people, just imagine how cool it would be to create a more smiling version of an actor without actually having to re-record the scene, or maybe even make legendary actors who are not with us anymore appear in a new movie. The band ABBA is already doing these virtual concerts where they appear as their younger selves, which is the true triumph of technology. What a time to be alive! Now, if you look at these results, these are absolutely amazing, but the resolution of the outputs is not the greatest. They don't seem to have enough fine details to look so convincing. And here is where this new paper comes into play. Now hold onto your papers and have a look at this. Wow! Oh my! Yes, you are seeing correctly, the authors claim that these are the first one megapixel resolution deepfakes out there. This means that they showcase considerably more detail than the previous ones. I love these. The amount of detail compared to previous works is simply stunning. But as you see, these are not perfect by any means. There is a price to be paid for these results. And the price is that we need to trade some temporal coherence for these extra pixels. What does that mean? It means that as we move from one image to the next one, the algorithm does not have a perfect memory of what it had done before, and therefore some of these jarring artifacts will appear. Now, you are all experienced fellow scholars, so here you know that we have to invoke the first law of papers which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, megapixel results, huh? How long do we have to wait for these? Well, if you have been holding onto your papers so far, now squeeze that paper because these somewhat reduced examples work in real time. Yes, this means that we can point a camera at ourselves, do our thing, and see ourselves become these people on our screens in real time. Now, I love the film directing aspect of these videos, and also I am super excited for us to be able to even put ourselves into a virtual world. And clearly, there are also people who are less interested in these amazing applications of these techniques. To make sure that we are prepared and everyone knows about this, I am trying my best to teach political decision makers from all around the world to make sure that they can make the best decisions for all of us. And I do this free of charge. And as any self-respecting fellow scholar would do, you can see me holding onto my papers here at a NATO conference. At such a conference, I often tell these political decision makers that deepfake detectors also exist, how reliable they are, what the key issues are, and more.
So, what do you think? What actor or musician would you like to see revived this way? Prince or Robin Williams, anyone? Let me know in the comments below. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold onto your papers because with Lambda GPU cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So, join researchers at organizations like Apple, MIT, and Caltech in using Lambda cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com, slash papers, to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
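The pipeline this transcript describes, one source image for identity plus a driver video for motion, can be summarized in a short schematic loop. Every function below is a hypothetical stand-in rather than the MegaPortraits API; the frame-by-frame independence in the loop also hints at why some temporal flicker is the price paid for the extra pixels.

def animate_portrait(source_image, driver_frames,
                     encode_identity, encode_motion, generate):
    # Identity (who the person is) is extracted once from a single image.
    identity = encode_identity(source_image)
    outputs = []
    for frame in driver_frames:
        # Motion (head pose, expression, gaze) comes from the driver video.
        motion = encode_motion(frame)
        # Frames are synthesized one by one, without a perfect memory of
        # previous frames -- one source of the jarring artifacts mentioned.
        outputs.append(generate(identity, motion))
    return outputs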
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 11.76, "text": " Today we are going to make the Mona Lisa and other fictional characters come to life through"}, {"start": 11.76, "end": 17.6, "text": " this deep fake technology that is able to create more detailed videos than previous techniques."}, {"start": 18.400000000000002, "end": 20.400000000000002, "text": " So what are deep fakes?"}, {"start": 21.28, "end": 27.52, "text": " Simple, we record a video of ourselves as you see me over here doing just that,"}, {"start": 27.52, "end": 34.4, "text": " and with this we can make a different person speak. What you see here is my favorite deep fake"}, {"start": 34.4, "end": 41.76, "text": " application where I became these movie characters and tried to make them come alive. Ideally,"}, {"start": 41.76, "end": 49.2, "text": " expressions, eye movements and other gestures are also transferred. Some of them even work for"}, {"start": 49.2, "end": 56.8, "text": " animation movie characters too. Absolutely lovely. This can and will create filmmaking"}, {"start": 56.8, "end": 64.0, "text": " much more accessible for all of us, which I absolutely love. Also, when it comes to real people,"}, {"start": 64.0, "end": 71.36, "text": " just imagine how cool it would be to create a more smiling version of an actor without actually"}, {"start": 71.36, "end": 79.12, "text": " having to re-record the scene, or maybe even make legendary actors who are not with us anymore"}, {"start": 79.12, "end": 86.08, "text": " appear in a new movie. The band Eba is already doing these virtual concerts where they appear"}, {"start": 86.08, "end": 92.32, "text": " as their younger selves, which is the true triumph of technology. What a time to be alive!"}, {"start": 92.88, "end": 100.16, "text": " Now, if you look at these results, these are absolutely amazing, but the resolution of the outputs"}, {"start": 100.16, "end": 105.68, "text": " is not the greatest. They don't seem to have enough fine details to look so convincing."}, {"start": 106.4, "end": 114.24, "text": " And here is where this new paper comes into play. Now hold onto your papers and have a look at this."}, {"start": 114.24, "end": 122.24, "text": " Wow! Oh my! Yes, you are seeing correctly, the authors claim that these are the first"}, {"start": 122.24, "end": 129.6, "text": " one megapixel resolution defects out there. This means that they showcase considerably more detail"}, {"start": 129.6, "end": 136.79999999999998, "text": " than the previous ones. I love these. The amount of detail compared to previous works is simply"}, {"start": 136.79999999999998, "end": 144.16, "text": " stunning. But as you see, these are not perfect by any means. There is a price to be paid for"}, {"start": 144.16, "end": 150.88, "text": " these results. And the price is that we need to trade some temporal coherence for these extra pixels."}, {"start": 151.44, "end": 158.0, "text": " What does that mean? It means that as we move from one image to the next one, the algorithm does"}, {"start": 158.0, "end": 164.24, "text": " not have a perfect memory of what it had done before, and therefore some of these jarring artifacts"}, {"start": 164.24, "end": 170.56, "text": " will appear. Now, you are all experienced fellow scholars, so here you know that we have to"}, {"start": 170.56, "end": 177.6, "text": " invoke the first law of papers which says that research is a process. 
Do not look at where we are,"}, {"start": 177.6, "end": 185.04, "text": " look at where we will be, two more papers down the line. So, megapixel results, huh? How long do"}, {"start": 185.04, "end": 191.28, "text": " we have to wait for these? Well, if you have been holding onto your papers so far, now squeeze that"}, {"start": 191.28, "end": 199.6, "text": " paper because these somewhat reduced examples work in real time. Yes, this means that we can point"}, {"start": 199.6, "end": 207.28, "text": " a camera at ourselves, do our thing, and see ourselves become these people on our screens in real time."}, {"start": 207.92, "end": 214.48, "text": " Now, I love the film directing aspect of these videos, and also I am super excited for us to be"}, {"start": 214.48, "end": 222.0, "text": " able to even put ourselves into a virtual world. And clearly, there are also people who are less"}, {"start": 222.0, "end": 227.76, "text": " interested in these amazing applications with these techniques. To make sure that we are prepared"}, {"start": 227.76, "end": 233.6, "text": " and everyone knows about this, I am trying my best to teach political decision makers from all"}, {"start": 233.6, "end": 240.07999999999998, "text": " around the world to make sure that they can make the best decisions for all of us. And I do this"}, {"start": 240.07999999999998, "end": 246.48, "text": " free of charge. And as any self-respecting fellow scholar would do, you can see me holding onto"}, {"start": 246.48, "end": 252.95999999999998, "text": " my papers here at a NATO conference. At such a conference, I often tell these political decision"}, {"start": 252.96, "end": 260.24, "text": " makers that defect detectors also exist, how reliable they are, what the key issues are, and more."}, {"start": 260.96000000000004, "end": 266.72, "text": " So, what do you think? What actor or musician would you like to see revived this way?"}, {"start": 267.36, "end": 272.72, "text": " Prince or Robin Williams, anyone? Let me know in the comments below. If you are looking for"}, {"start": 272.72, "end": 282.0, "text": " inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute."}, {"start": 282.0, "end": 288.96, "text": " No commitments or negotiation required. Just sign up and launch an instance. And hold onto"}, {"start": 288.96, "end": 298.24, "text": " your papers because with Lambda GPU cloud, you can get on-demand A100 instances for $1.10 per hour"}, {"start": 298.24, "end": 307.68, "text": " versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent"}, {"start": 307.68, "end": 315.44, "text": " storage? So, join researchers at organizations like Apple, MIT, and Caltech in using Lambda"}, {"start": 315.44, "end": 322.8, "text": " cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com, slash papers,"}, {"start": 322.8, "end": 328.56, "text": " to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous"}, {"start": 328.56, "end": 338.56, "text": " support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fJn9B64Znrk
OpenAI’s New AI Learned To Play Minecraft! ⛏
❤️ Come work for Weights & Biases! Check out open roles at https://wandb.me/jobs ❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "Learning to Play Minecraft with Video PreTraining (VPT)" is available here: https://openai.com/blog/vpt/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 An AI playing Minecraft? 0:22 NVIDIA's AI solution 1:22 Exploring underwater (NVIDIA) 1:39 Building a fence (NVIDIA) 1:45 "Fighting" an Ender Dragon (NVIDIA) 2:06 My wish is granted! 2:40 The OpenAI project 2:56 Building a wooden shelter (OpenAI) 3:19 What? A Diamond pickaxe? 3:34 Building the Diamond pickaxe (OpenAI) 4:24 Pillar jumping (OpenAI) 4:55 How does it learn? 5:24 Not just for games! 6:20 First Law of Papers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #minecraft #openai
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see what happens if we unleash an AI to watch Minecraft videos and then ask this AI to play it. And the big question is, can it play as well as we humans do? Well, you'll get your answer today. This is OpenAI's project, which is not to be confused with Nvidia's similar project called MineDojo. So, what was that? That was an AI project that watched a ton of YouTube tutorial videos on Minecraft. Minecraft is an open-world sandbox game, one of the most played games in the world, and this is a true sandbox, which means that you can do whatever you wish. You create the story of this game. Gather raw materials, build something, explore, or don't. It's all up to you. This is a sandbox game and your story after all. So what could Nvidia's AI do after watching these tutorial videos? Well, a great deal. For instance, we could even use natural language and ask it to explore an ocean monument. I really like this one because the word explore is sufficiently vague and open-ended. A human understands what exploring means, but does the AI understand it too? Well, as you see, it does. Amazing. It could also encircle these llamas with a fence. Or it could also be dropped into an arena with an Ender Dragon, the final boss of the game, if you will. However, look, this is a very limited setting for this task as the dragon is not charging at the AI, but at the very least Nvidia's AI understood what fighting meant in this game. Now, in this video, I said the following, quoting: starting this game from scratch and building up everything to defeat such a dragon takes a long time horizon and will be an excellent benchmark for the next AI one more paper down the line. I'd love to see it perform this, start to end. Make sure to subscribe; if such a paper appears, I'll be here to show it to you. End quote. And I thought maybe a year or two later there would be some activity here. And now hold onto your papers, because OpenAI scientists are here with their own spin on this project not one year later, but just one week later. Wow. And with this paper, I also got what I was looking for, which is an AI that understands longer time horizons. You see, they make a huge claim. They say that this is the first AI that is capable of crafting diamond tools. That is very ambitious. Can that really be? Well, I say that I will believe it when I see it. Do you see this long sequence of sub-tasks? We need to do all this correctly and in the correct order to be able to perform that. This takes up to 24,000 actions and 20 minutes even for a human player. So let's see if the AI is also up for the task. It starts chopping wood right away. Then the planks are coming along. Later it starts mining, finds iron ore, crafts a furnace, starts smelting, and at this point I am starting to think that it might make it. That would be absolutely amazing. However, not so fast, we still need some diamonds. Can it find some diamonds? It takes a while and, oh my, got them. And from then on we have established that this is a smart AI, so from here on out this is a piece of cake. There we go. I can't believe it. A diamond pickaxe. And here is my other favorite. It can also perform pillar jumps. What is that? This is a cool technique where we jump and while mid-air we quickly put a block below us and ta-da! We are now standing a little higher. If we do this over and over again we will be able to reach places that otherwise would be way out of reach. And the AI learned this too. How cool is that? And while we are looking at what else it is capable of, I would like to bring up one more remarkable thing here. Here goes. It learns from unlabeled videos. This means that the AI is not told what these videos are about. It is just allowed to watch them and find out for itself. And to be able to learn this much information from this kind of unlabeled data is just an incredible sign for future projects to come. You see, playing Minecraft well is great, but this is an AI that can learn new things from unstructured data. In the future these agents will be able to perform more general, non-gaming-related tasks. What a time to be alive! For instance, the amazing DeepMind lab is already walking this path. First they wrote an AI that could learn to play chess, Go and StarCraft 2, and then they used a similar system to solve protein folding. Now, make no mistake, the protein folding system is not exactly the same system as the chess AI, but they share a few building blocks. And here, having two amazing Minecraft AIs appear within a week of each other is a true testament to how quickly projects are coming along in AI research. So, let's apply the first law of papers. What do you think this will be capable of, one more paper down the line? Let me know in the comments below. This video has been supported by Weights & Biases. Look at this, they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more. Make sure to visit wandb.me/paperforum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
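The "learns from unlabeled videos" part is the heart of the Video PreTraining (VPT) idea: per the paper, a small amount of labeled footage trains an inverse dynamics model that guesses which action connects two consecutive frames, and that model then pseudo-labels the huge unlabeled video corpus so a policy can be trained by ordinary behavioral cloning. Below is a minimal sketch of that pipeline; all class names, network shapes, and helpers are hypothetical placeholders, not OpenAI's actual code.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts which action was taken between two consecutive frames."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=8, stride=4), nn.ReLU(),  # two stacked RGB frames -> 6 channels
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)  # infers its input size on first call

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # concatenate along channels
        return self.head(self.encoder(x))

def pseudo_label(idm, unlabeled_clips):
    """Turn unlabeled video into (frame, action) pairs for cloning."""
    with torch.no_grad():
        for frame_t, frame_t1 in unlabeled_clips:
            action = idm(frame_t, frame_t1).argmax(dim=-1)
            yield frame_t, action

def behavioral_cloning_step(policy, optimizer, frames, actions):
    """One gradient step of imitating the pseudo-labeled actions."""
    loss = nn.functional.cross_entropy(policy(frames), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point the transcript hints at: labeling a tiny dataset by hand is cheap, and once the inverse dynamics model exists, the effectively unlimited supply of YouTube gameplay becomes training data for free.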
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jone Fahir."}, {"start": 5.0, "end": 12.52, "text": " Today we are going to see what happens if we unleash an AI to watch Minecraft videos and"}, {"start": 12.52, "end": 15.72, "text": " then ask this AI to play it."}, {"start": 15.72, "end": 20.84, "text": " And the big question is, can it play as well as we humans do?"}, {"start": 20.84, "end": 23.64, "text": " Well, you'll get your answer today."}, {"start": 23.64, "end": 30.64, "text": " This is OpenAI's project, which is not to be confused with Nvidia's similar project called"}, {"start": 30.64, "end": 31.64, "text": " Mindodjo."}, {"start": 31.64, "end": 34.44, "text": " So, what was that?"}, {"start": 34.44, "end": 41.24, "text": " That was an AI project that watched a ton of YouTube tutorial videos on Minecraft."}, {"start": 41.24, "end": 47.6, "text": " Minecraft is an open world sandbox game, one of the most played games in the world, and"}, {"start": 47.6, "end": 53.2, "text": " this is a true sandbox, which means that you can do whatever you wish."}, {"start": 53.2, "end": 56.080000000000005, "text": " You create the story of this game."}, {"start": 56.080000000000005, "end": 61.6, "text": " Gather raw materials, build something, explore, or don't."}, {"start": 61.6, "end": 63.6, "text": " It's all up to you."}, {"start": 63.6, "end": 67.76, "text": " This is a sandbox game and your story after all."}, {"start": 67.76, "end": 73.76, "text": " So what could Nvidia's AI do after watching these tutorial videos?"}, {"start": 73.76, "end": 76.12, "text": " Well, a great deal."}, {"start": 76.12, "end": 83.56, "text": " For instance, we could even use natural language and ask it to explore an ocean monument."}, {"start": 83.56, "end": 89.84, "text": " I really like this one because the word explore is sufficiently vague and open-ended."}, {"start": 89.84, "end": 96.24000000000001, "text": " A human understands what exploring means, but does the AI understand it too?"}, {"start": 96.24000000000001, "end": 99.28, "text": " Well, as you see, it does."}, {"start": 99.28, "end": 100.44, "text": " Amazing."}, {"start": 100.44, "end": 105.36000000000001, "text": " It could also encircle these llamas with offense."}, {"start": 105.36, "end": 112.08, "text": " Or it could also be dropped into an arena with an under-dragon, the final boss of the game,"}, {"start": 112.08, "end": 113.08, "text": " if you will."}, {"start": 113.08, "end": 119.68, "text": " However, look, this is a very limited setting for this task as the dragon is not charging"}, {"start": 119.68, "end": 127.64, "text": " at the AI, but at the very least Nvidia's AI understood what fighting meant in this game."}, {"start": 127.64, "end": 133.84, "text": " Now in this video, I set the following, quoting, starting this game from scratch and building"}, {"start": 133.84, "end": 141.08, "text": " up everything to defeat such a dragon takes a long time horizon and will be an excellent"}, {"start": 141.08, "end": 145.04, "text": " benchmark for the next AI one more paper down the line."}, {"start": 145.04, "end": 148.76, "text": " I'd love to see it perform this, start to end."}, {"start": 148.76, "end": 154.12, "text": " Make sure to subscribe if such a paper appears I'll be here to show it to you."}, {"start": 154.12, "end": 155.36, "text": " End quote."}, {"start": 155.36, "end": 161.36, "text": " And I thought maybe a year or two later there will be some activity here."}, {"start": 
161.36, "end": 167.24, "text": " And now hold onto your papers because open AI scientists are here with their own spin"}, {"start": 167.24, "end": 173.36, "text": " of this project not one year later, but just one week later."}, {"start": 173.36, "end": 174.68, "text": " Wow."}, {"start": 174.68, "end": 181.16000000000003, "text": " And with this paper, I also got what I was looking for, which is an AI that understands longer"}, {"start": 181.16000000000003, "end": 182.60000000000002, "text": " time horizons."}, {"start": 182.60000000000002, "end": 185.20000000000002, "text": " You see, they make a huge claim."}, {"start": 185.20000000000002, "end": 191.32000000000002, "text": " They say that this is the first AI that is capable of crafting diamond tools."}, {"start": 191.32, "end": 193.44, "text": " That is very ambitious."}, {"start": 193.44, "end": 195.0, "text": " Can that really be?"}, {"start": 195.0, "end": 198.76, "text": " Well, I say that I will believe it when I see it."}, {"start": 198.76, "end": 201.76, "text": " Do you see this long sequence of sub-tasks?"}, {"start": 201.76, "end": 207.88, "text": " We need to do all this correctly and in the correct order to be able to perform that."}, {"start": 207.88, "end": 214.79999999999998, "text": " This takes up to 24,000 actions and 20 minutes even for a human player."}, {"start": 214.79999999999998, "end": 219.35999999999999, "text": " So let's see if the AI is also up for the task."}, {"start": 219.36, "end": 222.68, "text": " It starts chopping wood right away."}, {"start": 222.68, "end": 225.44000000000003, "text": " Then the planks are coming along."}, {"start": 225.44000000000003, "end": 234.72000000000003, "text": " Later it starts mining, finds iron ore, crafts, a furnace, starts smelting, and at this"}, {"start": 234.72000000000003, "end": 239.0, "text": " point I am starting to think that it might make it."}, {"start": 239.0, "end": 241.36, "text": " That would be absolutely amazing."}, {"start": 241.36, "end": 245.68, "text": " However, not so fast, we still need some diamonds."}, {"start": 245.68, "end": 247.60000000000002, "text": " Can it find some diamonds?"}, {"start": 247.6, "end": 252.28, "text": " It takes a while and oh my, got them."}, {"start": 252.28, "end": 258.4, "text": " And from then on we have established that this is a smart AI, so from here on out this"}, {"start": 258.4, "end": 260.4, "text": " is a piece of cake."}, {"start": 260.4, "end": 261.4, "text": " There we go."}, {"start": 261.4, "end": 263.15999999999997, "text": " I can't believe it."}, {"start": 263.15999999999997, "end": 265.2, "text": " A diamond pickaxe."}, {"start": 265.2, "end": 267.48, "text": " And here is my other favorite."}, {"start": 267.48, "end": 270.44, "text": " It can also perform pillar jumps."}, {"start": 270.44, "end": 271.96, "text": " What is that?"}, {"start": 271.96, "end": 277.44, "text": " This is a cool technique where we jump and while me there we quickly put a block"}, {"start": 277.44, "end": 280.44, "text": " below us and ta-da!"}, {"start": 280.44, "end": 283.24, "text": " We are now standing a little higher."}, {"start": 283.24, "end": 288.6, "text": " If we do this over and over again we will be able to reach places that otherwise would"}, {"start": 288.6, "end": 291.0, "text": " be way out of reach."}, {"start": 291.0, "end": 293.52, "text": " And the AI learned this too."}, {"start": 293.52, "end": 295.44, "text": " How cool is that?"}, {"start": 295.44, "end": 300.44, "text": " And while we are looking 
at what else it is capable of, I would like to bring up one"}, {"start": 300.44, "end": 302.92, "text": " more remarkable thing here."}, {"start": 302.92, "end": 303.92, "text": " Here goes."}, {"start": 303.92, "end": 306.72, "text": " It learns from unleabled videos."}, {"start": 306.72, "end": 311.48, "text": " This means that the AI is not told what these videos are about."}, {"start": 311.48, "end": 316.16, "text": " It is just allowed to watch them and find out for itself."}, {"start": 316.16, "end": 322.36, "text": " And to be able to learn this much information from this kind of unleabled data is just an"}, {"start": 322.36, "end": 325.92, "text": " incredible sign for future projects to come."}, {"start": 325.92, "end": 332.56, "text": " You see, playing Minecraft well is great, but this is an AI that can learn new things"}, {"start": 332.56, "end": 334.8, "text": " from unstructured data."}, {"start": 334.8, "end": 341.24, "text": " In the future these agents will be able to perform more general non-gaming related tasks."}, {"start": 341.24, "end": 343.36, "text": " What a time to be alive!"}, {"start": 343.36, "end": 348.04, "text": " For instance, the amazing DeepMind Lab is already walking this path."}, {"start": 348.04, "end": 355.76, "text": " First the Rotan AI that could learn to play chess, go and Starcraft 2 and then they used"}, {"start": 355.76, "end": 359.28000000000003, "text": " a similar system to solve protein folding."}, {"start": 359.28, "end": 365.4, "text": " Now, make no mistake, the protein folding system is not exactly the same system as the chess"}, {"start": 365.4, "end": 369.2, "text": " AI, but they share a few building blocks."}, {"start": 369.2, "end": 375.71999999999997, "text": " And here, having two amazing Minecraft AI's appear within a week of each other is a"}, {"start": 375.71999999999997, "end": 381.28, "text": " true testament to how quickly projects are coming along in AI research."}, {"start": 381.28, "end": 385.23999999999995, "text": " So, let's apply the first law of papers."}, {"start": 385.24, "end": 389.76, "text": " What do you think this will be capable of, one more paper down the line?"}, {"start": 389.76, "end": 391.76, "text": " Let me know in the comments below."}, {"start": 391.76, "end": 395.44, "text": " This video has been supported by weights and biases."}, {"start": 395.44, "end": 400.96000000000004, "text": " Look at this, they have a great community forum that aims to make you the best machine"}, {"start": 400.96000000000004, "end": 403.04, "text": " learning engineer you can be."}, {"start": 403.04, "end": 407.94, "text": " You see, I always get messages from you fellow scholars telling me that you have been"}, {"start": 407.94, "end": 413.16, "text": " inspired by the series, but don't really know where to start."}, {"start": 413.16, "end": 414.48, "text": " And here it is."}, {"start": 414.48, "end": 420.16, "text": " In this forum, you can share your projects, ask for advice, look for collaborators and"}, {"start": 420.16, "end": 421.16, "text": " more."}, {"start": 421.16, "end": 429.24, "text": " Make sure to visit www.me-slash-paper-forum and say hi or just click the link in the video"}, {"start": 429.24, "end": 430.24, "text": " description."}, {"start": 430.24, "end": 435.40000000000003, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 435.40000000000003, "end": 436.88, "text": " better videos for you."}, {"start": 436.88, "end": 447.2, "text": " 
Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=TW2w-z0UtQU
OpenAI’s DALL-E 2: Top 5 New Results! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Ladybug inpainting: https://www.reddit.com/r/dalle2/comments/veznq2/using_dalles_inpainting_feature_to_fix_up_my/ Michelangelo: https://twitter.com/FLKDayton/status/1543261364315193346 Mona Lisa: https://www.reddit.com/r/dalle2/comments/venhn1/modern_day_mona_lisa_composite_zoomout_video/ Mona Lisa 2: https://twitter.com/karenxcheng/status/1552720889489154048 House: https://www.reddit.com/r/dalle2/comments/vnw3z9/a_house_in_the_middle_of_a_beautiful_lush_field/ The Last Supper: https://twitter.com/gabe_ragland/status/1539005324983578624 Illustrating a fantasy novel: https://twitter.com/wenquai/status/1527312285152452608 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #dalle
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to talk about OpenAI's amazing image generator AI, DALL-E 2, and you will see that it can do two things that perhaps not many of you have heard of. So, what is DALL-E 2? This is a diffusion-based AI model, which means that it starts out from noise and gradually changes it to become a beautiful image like this one. The image can be anything, so much so that we can enter a piece of text and magically out comes an image that fits this description. It would be hard to overstate how transformative this will be for digital art. Imagine that you could now illustrate any novel, and not only that, but it wouldn't even need any additional work from you to write up the text prompt. Why? Well, just take excerpts from the novel and there we go. And if you feel like adding a crazy prompt, as you see, it won't complain about that either. But it can do even more. For instance, if we have an image that we really like, variant generation is also possible. This means that we can provide it an input image, and it tries to understand what it depicts and creates several versions of it. And it does that spectacularly. But that's not the only little-known feature it has. It can also do two other amazing things. One is image inpainting. What is that? Image inpainting means that we take an input image or video, cut out an undesirable part, and let an algorithm fill in the holes with data that makes sense given the context of this image. Now, previous techniques were already capable of this, but I am very excited to see what DALL-E 2 can do with this problem, because we know that it has a really detailed understanding of the world around us. So, can it use its knowledge of the world to create high-quality inpaintings? Oh boy, you will see in a moment that it can do incredibly well. Let's give it a try. Oh yes, we can delete this squirrel, and here it comes. Oh my, remember that I said that inpainting techniques fill in the holes given the context of the image. But DALL-E 2 does something even better. It fills in the image with any context we can think of. Let's try that. How? Well, this image is ridiculous. Squirrels are not known to be able to operate computers, so let's add a hippo instead. And there we go. Much better. Loving it. Now, let's push it to the limits through five outstanding examples. One, the ladybug. Here's a real photo, but look. Unfortunately, some parts of the photo are out of focus. Oh yes, how about deleting those parts and having them inpainted with DALL-E 2? Could that work? It knows that this is a ladybug after all. Let's see. Wow. That is much sharper. And fits the image really well. I would have to look for quite a while to find the trickery. What about you? Let me know in the comments below. So this is fantastic. But you know what is even more fantastic? Now hold onto your papers because it can do not only image inpainting, but image outpainting. What is that? Well, two. I wonder what might be outside of this frame? Well, let's ask DALL-E 2. And that is insanity. It is really doing it. And we can just zoom out and out and out. And it feels like it never stops. And it just gets crazier over time. If a human had made this, I would say that human is very creative. But this. This was done by a machine. Is that a hint of creativity in a machine? Hmm. Food for thought. Anyway, I love this one. What a time to be alive. Three. Of course, we need to do that to the Mona Lisa too. What could be outside the frame? Aha. I see. This is a modern take on the old classic, working at the office while everyone else is enjoying life. And note that once again, a prompt was used to generate the theme of the outpainting here. These are the prompts that were used for this example. And these are so good. Let's do a couple more. Four. Zooming out from a house in the middle of a lush field to this. Ah, very artistic. And it turns out, wow, that we are on a floating island in the middle of the universe with a twist. And last but not least, five, The Last Supper. But this time, let's not zoom out, but let's try to answer one of history's big questions. And that is, what might be going on in the adjacent rooms? Look. I love it. And I wonder what DALL-E 3 will be capable of. Remember, DALL-E 2 came out just about a year after the first version and it was leaps and bounds better. And the competition is also coming. Google already has their Imagen and Parti AIs, and even that is just scratching the surface. As you see, the pace of progress in AI research is absolutely incredible. These tools are going to transform the world around us as we know it, especially now that OpenAI is aiming to deploy this AI to no less than one million people in the near future. How cool is that? Democratizing art creation for all of us. Soon. Bravo. So, does this get your mind going? Do you have any thoughts as to what DALL-E 3 could do? Let me know in the comments below. If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold onto your papers because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
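Since this episode leans on two ideas, diffusion sampling and inpainting, here is a minimal sketch of how the two are commonly combined: sampling starts from pure noise and is gradually denoised, and at every step the pixels outside the edit mask are overwritten with a copy of the original photo noised to the matching level, so only the hole gets generated. This is an illustrative sketch of the general recipe, not OpenAI's actual implementation; `denoise_step` stands in for a trained network and its noise schedule.

```python
import numpy as np

def noised(image, t, rng):
    """Blend the clean image with Gaussian noise for noise level t in (0, 1]."""
    return np.sqrt(1 - t) * image + np.sqrt(t) * rng.standard_normal(image.shape)

def inpaint(image, mask, denoise_step, steps=50, seed=0):
    """Fill the region where mask == 1, keeping the rest of `image` intact."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(image.shape)       # start from pure noise
    for i in reversed(range(steps)):
        t = (i + 1) / steps                    # noise level runs from 1 down to 1/steps
        x = denoise_step(x, t)                 # one learned denoising step
        # re-impose the known pixels at the matching noise level,
        # so only the masked hole is actually generated
        x = mask * x + (1 - mask) * noised(image, t, rng)
    return np.clip(mask * x + (1 - mask) * image, -1, 1)
```

Outpainting follows from the same routine: pad the photo with empty canvas, set the mask to cover everything outside the original frame, and the model invents the surroundings instead of a hole.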
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojone Fahir."}, {"start": 4.6000000000000005, "end": 10.24, "text": " Today, we are going to talk about OpenAI's amazing image generator AI,"}, {"start": 10.24, "end": 18.2, "text": " Dolly II, and you will see that it can do two things that perhaps not many of you have heard of."}, {"start": 18.2, "end": 20.6, "text": " So, what is Dolly II?"}, {"start": 20.6, "end": 26.2, "text": " This is a diffusion-based AI model, which means that it starts out from noise"}, {"start": 26.2, "end": 31.8, "text": " and gradually changes it to become a beautiful image like this one."}, {"start": 31.8, "end": 39.0, "text": " The image can be anything so much so that we can enter a piece of text and magically"}, {"start": 39.0, "end": 42.4, "text": " outcomes an image that feeds this description."}, {"start": 42.4, "end": 47.6, "text": " It would be hard to overstate how transformative this will be to digital art."}, {"start": 47.6, "end": 53.2, "text": " Imagine that you could now illustrate any novel and not only that,"}, {"start": 53.2, "end": 58.800000000000004, "text": " but it wouldn't even need any additional work for you to write up the text prompt."}, {"start": 58.800000000000004, "end": 59.800000000000004, "text": " Why?"}, {"start": 59.800000000000004, "end": 64.4, "text": " Well, just take excerpts from the novel and there we go."}, {"start": 64.4, "end": 68.0, "text": " And if you feel like adding crazy prompt,"}, {"start": 68.0, "end": 71.4, "text": " as you see, it won't complain about that either."}, {"start": 71.4, "end": 74.2, "text": " But it can do even more."}, {"start": 74.2, "end": 77.80000000000001, "text": " For instance, if we have an image that we really like,"}, {"start": 77.80000000000001, "end": 80.60000000000001, "text": " a variant generation is also possible."}, {"start": 80.6, "end": 84.0, "text": " This means that we can provide it an input image,"}, {"start": 84.0, "end": 89.8, "text": " it tries to understand what it depicts and create several versions of it."}, {"start": 89.8, "end": 93.0, "text": " And it does that spectacularly."}, {"start": 93.0, "end": 96.8, "text": " But that's not the only little known feature it has."}, {"start": 96.8, "end": 101.0, "text": " It can also do two other amazing things."}, {"start": 101.0, "end": 103.8, "text": " One is image-impainting."}, {"start": 103.8, "end": 105.19999999999999, "text": " What is that?"}, {"start": 105.19999999999999, "end": 109.8, "text": " Image-impainting means that we take an input image or video"}, {"start": 109.8, "end": 114.0, "text": " cut out an undesirable part and Latin algorithm"}, {"start": 114.0, "end": 120.0, "text": " fill in the holes with data that makes sense given the context of this image."}, {"start": 120.0, "end": 124.0, "text": " Now, previous techniques were already capable of this,"}, {"start": 124.0, "end": 129.0, "text": " but I am very excited to see what Dolly too can do with this problem"}, {"start": 129.0, "end": 135.2, "text": " because we know that it has a really detailed understanding of the world around us."}, {"start": 135.2, "end": 140.79999999999998, "text": " So, can it use its knowledge of the world to create high-quality impaintings?"}, {"start": 140.79999999999998, "end": 145.2, "text": " Oh boy, you were seeing a moment that it can do incredibly well."}, {"start": 145.2, "end": 147.0, "text": " Let's give it a try."}, {"start": 147.0, "end": 152.0, "text": 
" Oh yes, we can delete this squirrel and here comes the key."}, {"start": 152.0, "end": 156.0, "text": " Oh my, remember that I said that impainting techniques"}, {"start": 156.0, "end": 160.2, "text": " fill in the holes given the context of the image."}, {"start": 160.2, "end": 163.79999999999998, "text": " But Dolly too does something even better."}, {"start": 163.8, "end": 168.20000000000002, "text": " It feels in the image with any context we can think of."}, {"start": 168.20000000000002, "end": 169.60000000000002, "text": " Let's try that."}, {"start": 169.60000000000002, "end": 170.60000000000002, "text": " How?"}, {"start": 170.60000000000002, "end": 173.0, "text": " Well, this image is ridiculous."}, {"start": 173.0, "end": 176.60000000000002, "text": " Squirrels are not known to be able to operate computers,"}, {"start": 176.60000000000002, "end": 179.8, "text": " so let's add a hippo instead."}, {"start": 179.8, "end": 181.8, "text": " And there we go."}, {"start": 181.8, "end": 183.60000000000002, "text": " Much better."}, {"start": 183.60000000000002, "end": 184.60000000000002, "text": " Loving it."}, {"start": 184.60000000000002, "end": 190.0, "text": " Now, let's push it to the limits through five outstanding examples."}, {"start": 190.0, "end": 192.0, "text": " One, the Ladybug."}, {"start": 192.0, "end": 195.2, "text": " Here's a real photo, but look."}, {"start": 195.2, "end": 199.4, "text": " Unfortunately, some parts of the photo are out of focus."}, {"start": 199.4, "end": 206.6, "text": " Oh yes, how about deleting those parts and have them impainted with Dolly too?"}, {"start": 206.6, "end": 207.6, "text": " Could that work?"}, {"start": 207.6, "end": 210.8, "text": " It knows that this is a Ladybug after all."}, {"start": 210.8, "end": 212.8, "text": " Let's see."}, {"start": 212.8, "end": 213.8, "text": " Wow."}, {"start": 213.8, "end": 215.8, "text": " That is much sharper."}, {"start": 215.8, "end": 218.6, "text": " And fits the image really well."}, {"start": 218.6, "end": 223.51999999999998, "text": " I would have to look for quite a while to find out the trickery."}, {"start": 223.51999999999998, "end": 224.51999999999998, "text": " What about you?"}, {"start": 224.51999999999998, "end": 226.4, "text": " Let me know in the comments below."}, {"start": 226.4, "end": 228.72, "text": " So this is fantastic."}, {"start": 228.72, "end": 232.2, "text": " But you know what is even more fantastic?"}, {"start": 232.2, "end": 239.79999999999998, "text": " Now hold onto your papers because it can also do not only image in painting, but image"}, {"start": 239.79999999999998, "end": 241.6, "text": " outpainting."}, {"start": 241.6, "end": 242.6, "text": " What is that?"}, {"start": 242.6, "end": 244.56, "text": " Well, too."}, {"start": 244.56, "end": 248.52, "text": " I wonder what might be outside of this frame?"}, {"start": 248.52, "end": 250.68, "text": " Well, let's ask Dolly too."}, {"start": 250.68, "end": 254.04, "text": " And that is insanity."}, {"start": 254.04, "end": 256.28000000000003, "text": " It is really doing it."}, {"start": 256.28000000000003, "end": 259.72, "text": " And we can just zoom out and out and out."}, {"start": 259.72, "end": 262.52, "text": " And it feels like it never stops."}, {"start": 262.52, "end": 265.2, "text": " And it just gets crazier over time."}, {"start": 265.2, "end": 270.4, "text": " If a human would have made this, I would say that human is very creative."}, {"start": 270.4, "end": 271.4, "text": " But this."}, {"start": 
271.4, "end": 273.4, "text": " This was done by a machine."}, {"start": 273.4, "end": 276.71999999999997, "text": " Is that a hint of creativity in a machine?"}, {"start": 276.71999999999997, "end": 277.71999999999997, "text": " Hmm."}, {"start": 277.71999999999997, "end": 278.71999999999997, "text": " Food for thought."}, {"start": 278.71999999999997, "end": 281.47999999999996, "text": " Anyway, I love this one."}, {"start": 281.47999999999996, "end": 283.56, "text": " What a time to be alive."}, {"start": 283.56, "end": 284.56, "text": " Three."}, {"start": 284.56, "end": 288.64, "text": " Of course, we need to do that to the Mona Lisa too."}, {"start": 288.64, "end": 291.2, "text": " What could be outside the frame?"}, {"start": 291.2, "end": 292.2, "text": " Aha."}, {"start": 292.2, "end": 294.2, "text": " I see."}, {"start": 294.2, "end": 300.2, "text": " This is a modern take on the old classic, working at the office while everyone else is enjoying"}, {"start": 300.2, "end": 301.2, "text": " life."}, {"start": 301.2, "end": 306.24, "text": " And note that once again, a prompt was used to generate the theme of the outpainting"}, {"start": 306.24, "end": 307.44, "text": " here."}, {"start": 307.44, "end": 310.59999999999997, "text": " These are the prompts that were used for this example."}, {"start": 310.59999999999997, "end": 312.68, "text": " And these are so good."}, {"start": 312.68, "end": 314.92, "text": " Let's do a couple more."}, {"start": 314.92, "end": 315.92, "text": " Four."}, {"start": 315.92, "end": 320.15999999999997, "text": " Zooming out from a house in the middle of a lush field to this."}, {"start": 320.15999999999997, "end": 322.48, "text": " A very artistic."}, {"start": 322.48, "end": 329.84, "text": " And it turns out, wow, that we are on a floating island in the middle of the universe with"}, {"start": 329.84, "end": 332.28, "text": " a twist."}, {"start": 332.28, "end": 336.88, "text": " And last but not least, five, the last supper."}, {"start": 336.88, "end": 344.0, "text": " But this time, let's not zoom out, but let's try to answer one of history's big questions."}, {"start": 344.0, "end": 348.64, "text": " And that is, what might be going on in the adjacent rooms?"}, {"start": 348.64, "end": 349.64, "text": " Look."}, {"start": 349.64, "end": 352.35999999999996, "text": " I love it."}, {"start": 352.35999999999996, "end": 356.23999999999995, "text": " And I wonder what Dolly 3 will be capable of."}, {"start": 356.24, "end": 363.16, "text": " Remember Dolly 2 came out just about a year after the first version and it was Leaps and"}, {"start": 363.16, "end": 364.64, "text": " Bounds better."}, {"start": 364.64, "end": 367.32, "text": " And the competition is also coming."}, {"start": 367.32, "end": 374.72, "text": " Google already has their image and party AIs and even that is just scratching the surface."}, {"start": 374.72, "end": 380.36, "text": " As you see, the pace of progress in AI research is absolutely incredible."}, {"start": 380.36, "end": 385.56, "text": " These tools are going to transform the world around us as we know it, especially now that"}, {"start": 385.56, "end": 394.28000000000003, "text": " OpenAI is aiming to deploy this AI to no less than one million people in the near future."}, {"start": 394.28000000000003, "end": 396.68, "text": " How cool is that?"}, {"start": 396.68, "end": 399.52, "text": " Democritizing art creation for all of us."}, {"start": 399.52, "end": 400.52, "text": " Soon."}, {"start": 400.52, "end": 401.52, "text": 
" Bravo."}, {"start": 401.52, "end": 404.04, "text": " So, does this get your mind going?"}, {"start": 404.04, "end": 407.8, "text": " Do you have any thoughts as to what Dolly 3 could do?"}, {"start": 407.8, "end": 409.84000000000003, "text": " Let me know in the comments below."}, {"start": 409.84, "end": 416.76, "text": " If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices"}, {"start": 416.76, "end": 420.23999999999995, "text": " in the world for GPU cloud compute."}, {"start": 420.23999999999995, "end": 423.15999999999997, "text": " No commitments or negotiation required."}, {"start": 423.15999999999997, "end": 426.32, "text": " Just sign up and launch an instance."}, {"start": 426.32, "end": 433.91999999999996, "text": " And hold onto your papers because with Lambda GPU cloud, you can get on demand A100 instances"}, {"start": 433.92, "end": 440.28000000000003, "text": " for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 440.28000000000003, "end": 443.36, "text": " That's 73% savings."}, {"start": 443.36, "end": 446.84000000000003, "text": " Did I mention they also offer persistent storage?"}, {"start": 446.84000000000003, "end": 455.0, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 455.0, "end": 457.40000000000003, "text": " workstations or servers."}, {"start": 457.40000000000003, "end": 463.84000000000003, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 463.84, "end": 464.84, "text": " instances today."}, {"start": 464.84, "end": 493.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=W1UDzxtrhes
NVIDIA’s Ray Tracer - Finally, Real Time! ☀️
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Rearchitecting Spatiotemporal Resampling for Production" is available here: https://research.nvidia.com/publication/2021-07_Rearchitecting-Spatiotemporal-Resampling 📝 Our paper with the spheres scene that took 3 weeks is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ The denoiser: https://developer.nvidia.com/nvidia-rt-denoiser ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I can't believe that I am saying this, but today we are going to see how Nvidia is getting closer and closer to solving an almost impossible problem. And that is running real-time light transport simulations. What is that? And why is it nearly impossible? Well, if we wish to create a truly photorealistic scene in computer graphics, we usually reach out to a light transport simulation algorithm, and then this happens. Oh yes, concept number one: noise. This is not photorealistic at all, not yet anyway. Why is that? Well, during this process we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take from minutes to days for this to happen, even for a smaller scene. For instance, this one took us three full weeks to finish. Three weeks. Ouch. Now, earlier, we talked about this technique which could take complex geometry and 3.4 million light sources, and it could render not just an image, but an animation of it, interactively. But how? Well, the magic behind all this is a smarter allocation of the ray samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we move the camera a little and advance to the next image. Thus, lots of information that is otherwise thrown away can now be reused as we advance the animation. And note that even then, these smooth, beautiful images are not what we get directly. If we look under the hood, we see that as the raw result, we get something like this. Oh yes, still a noisy image. But wait, don't despair. We don't have to live with these noisy images. We have denoising algorithms tailored for light simulations. This one does some serious legwork with this noisy input. And in a follow-up paper, they also went on to tackle these notoriously difficult photorealistic smoke plumes, volumetric bunnies, and even explosions, interactively. These results were once again noise-filtered to nearly perfection. Not to perfection, but a step closer to it than before. Now note that I used the word interactively twice here. I did not say real time. And that is not by mistake. These techniques are absolutely fantastic, one of the bigger leaps in light transport research, but they still cost a touch more than what production systems can shoulder. They are not quite real time yet. So, what did they do? Did they stop there? Well, of course not. They rolled up their sleeves and continued. And now, I hope you know what's coming. Oh yes, have a look at this newer paper they have in this area. This is the result on the Paris Opera House scene, which is quite detailed. There is a ton going on here. And you are all experienced fellow scholars now. So, when you see them flicking between the raw, noisy and the denoised results, you know exactly what is going on. And hold on to your papers, because all this takes about 12 milliseconds per frame. Yes, yes, yes. My goodness, that is finally in the real-time domain, and then some. What a time to be alive. Okay, so where's the catch? Our keen eyes see that this is a static scene. It probably can't deal with dynamic movements and rapid changes in lighting. Can it? Well, let's have a look. Wow, I cannot believe my eyes. Dynamic movement, checkmark. And here, this is as much change in the lighting as anyone would ever want. And it can do this too. I absolutely love it. And remember the amusement park scene from the previous paper, the one with 23 million triangles for the geometry and over 3 million light sources? Well, here are the raw results. And after the denoising, this looks super clean. Wow. So, how long do we have to wait for this? This can't be real time, right? Well, all this takes is about 12 milliseconds per frame. Again, and this is where I fell off the chair when reading this paper. Of course, not even this technique is perfect. The glossy reflections are a little blurry in places, and artifacts in the lighting can still appear. But if this is not a quantum leap in light transport research, I don't know what is. Plus, if you wish to see some properly detailed comparisons against previous techniques, make sure to have a look at the paper. And if you have been holding onto your papers so far, now squeeze that paper, because everything that you see in this paper was done by two people. Huge congratulations, Chris and Alexey. And if you are wondering if we ever get to use these techniques, don't forget, Nvidia's Marbles demo is already out there for all of us to use. And it gets better; for instance, not many know that they already have a denoising technique that is online and ready to use for all of us. This one is a professional-grade tool right there. This is really incredible. They have so many tools out there for us to use. I check what Nvidia is up to daily, and I still quite often get surprised about how much they have going on. And now, finally, real-time light transport in our lifetimes. Oh yes, this paper is history in the making. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to Cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
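The "smarter allocation of ray samples" and the frame-to-frame reuse described above is, in this line of papers (the paper is the production follow-up to spatiotemporal reservoir resampling, ReSTIR), built on weighted reservoir sampling: each pixel streams light-sample candidates through a tiny reservoir, and instead of discarding everything at the next frame, the previous frame's reservoir is merged into the current one. A minimal illustrative sketch of that core data structure follows, assuming one kept sample per pixel; the real renderer runs on the GPU and is far more involved.

```python
import random

class Reservoir:
    """One pixel's reservoir: a single kept light sample plus bookkeeping."""
    def __init__(self):
        self.sample = None   # the one candidate we currently keep
        self.w_sum = 0.0     # running sum of resampling weights
        self.count = 0       # number of candidates streamed through

    def update(self, candidate, weight, rng=random):
        """Stream in one candidate with its resampling weight."""
        self.w_sum += weight
        self.count += 1
        # standard weighted reservoir sampling: keep the newcomer
        # with probability weight / w_sum
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

    def merge(self, other):
        """Fold in another reservoir, e.g. this pixel's reservoir from the last frame."""
        if other.sample is not None:
            self.update(other.sample, other.w_sum)
            self.count += other.count - 1  # it stands for many past candidates
```

The key property: merging costs the same as inserting one candidate, yet the merged reservoir behaves statistically as if all of the previous frame's candidates had been streamed through again, which is where the "information that is otherwise thrown away can now be reused" speedup comes from.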
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jorna Ifeher."}, {"start": 4.72, "end": 12.16, "text": " I can't believe that I am saying this, but today we are going to see how Nvidia is getting"}, {"start": 12.16, "end": 19.44, "text": " closer and closer to solving an almost impossible problem. And that is, writing real-time"}, {"start": 19.44, "end": 27.36, "text": " light transport simulations. What is that? And why is it nearly impossible? Well, if we wish to"}, {"start": 27.36, "end": 34.24, "text": " create a truly photorealistic scene in computer graphics, we usually reach out to a light transport"}, {"start": 34.24, "end": 43.6, "text": " simulation algorithm and then this happens. Oh yes, concept number one. Noise. This is not photorealistic"}, {"start": 43.6, "end": 51.84, "text": " at all, not yet anyway. Why is that? Well, during this process we have to shoot millions and millions"}, {"start": 51.84, "end": 58.480000000000004, "text": " of light rays into the scene to estimate how much light is bouncing around and before we have"}, {"start": 58.480000000000004, "end": 65.2, "text": " simulated enough rays, the inaccuracies in our estimations show up as noise in these images."}, {"start": 65.84, "end": 73.28, "text": " This clears up over time, but it may take from minutes to days for this to happen even for a"}, {"start": 73.28, "end": 81.28, "text": " smaller scene. For instance, this one took us three full weeks to finish. Three weeks. Ouch."}, {"start": 82.08, "end": 90.24000000000001, "text": " Now, earlier, we talked about this technique which could take complex geometry and 3.4 million"}, {"start": 90.24000000000001, "end": 97.44, "text": " light sources and it could really render not just an image, but an animation of it interactively."}, {"start": 97.44, "end": 105.92, "text": " But how? Well, the magic behind all this is a smarter allocation of these ray samples that we"}, {"start": 105.92, "end": 113.03999999999999, "text": " have to shoot into the scene. For instance, this technique does not forget what we did just a moment"}, {"start": 113.03999999999999, "end": 120.96, "text": " ago when we move the camera a little and advance to the next image. Thus, lots of information that"}, {"start": 120.96, "end": 129.28, "text": " is otherwise thrown away can now be reused as we advance the animation. And note that even then"}, {"start": 129.28, "end": 135.92, "text": " these smooth, beautiful images are not what we get directly. If we look under the hood, we see"}, {"start": 135.92, "end": 144.48, "text": " that the raw result, we get something like this. Oh yes, still a noisy image. But wait, don't"}, {"start": 144.48, "end": 150.88, "text": " despair. We don't have to live with these noisy images. We have the noisy algorithms tailored"}, {"start": 150.88, "end": 157.2, "text": " for light simulations. This one does some serious legwork with this noisy input. And in a"}, {"start": 157.2, "end": 163.6, "text": " follow-up paper, they also went on to tackle these notoriously difficult photorealistic smoke"}, {"start": 163.6, "end": 172.23999999999998, "text": " plumes, volumetric bunnies, and even explosions interactively. These results were once again"}, {"start": 172.24, "end": 179.12, "text": " noise filtered to nearly perfection. Not to perfection, but a step closer to it than before."}, {"start": 179.76000000000002, "end": 187.44, "text": " Now note that I used the word interactively twice here. 
I did not say real time. And that"}, {"start": 187.44, "end": 193.36, "text": " is not by mistake. These techniques are absolutely fantastic. One of the bigger leaves in"}, {"start": 193.36, "end": 199.44, "text": " light transport research, but they still cost a touch more than what production systems can"}, {"start": 199.44, "end": 207.2, "text": " shoulder. They are not quite real time yet. So, what did they do? Did they stop there? Well,"}, {"start": 207.2, "end": 213.44, "text": " of course not. They rolled up the sleeves and continued. And now, I hope you know what's coming."}, {"start": 214.0, "end": 221.44, "text": " Oh yes, have a look at this newer paper they have in this area. This is the result on the Paris Opera"}, {"start": 221.44, "end": 228.32, "text": " House scene, which is quite detailed. There is a ton going on here. And you are all experience"}, {"start": 228.32, "end": 234.79999999999998, "text": " fellow scholars now. So, when you see them flicking between the raw, noisy, and the denoise"}, {"start": 234.79999999999998, "end": 241.68, "text": " results, you know exactly what is going on. And hold on to your papers because all this takes"}, {"start": 241.68, "end": 250.88, "text": " about 12 milliseconds per frame. Yes, yes, yes. My goodness, that is finally in the real time domain"}, {"start": 250.88, "end": 259.36, "text": " and then some. What a time to be alive. Okay, so where's the catch? Our keen eyes see that this"}, {"start": 259.36, "end": 265.92, "text": " is a static scene. It probably can deal with dynamic movements and rapid changes in lighting."}, {"start": 265.92, "end": 274.15999999999997, "text": " Can it? Well, let's have a look. Wow, I cannot believe my eyes. Dynamic movement,"}, {"start": 274.16, "end": 281.84000000000003, "text": " checkmark. And here this is as much changing in the lighting as anyone would ever want. And it"}, {"start": 281.84000000000003, "end": 290.16, "text": " can do this too. I absolutely love it. And remember the amusement park scene from the previous paper,"}, {"start": 290.16, "end": 298.0, "text": " the one with 23 million triangles for the geometry and over 3 million light sources. Well,"}, {"start": 298.0, "end": 307.04, "text": " here are the raw results. And after the noisy, this looks super clean. Wow. So, how long do we have"}, {"start": 307.04, "end": 314.4, "text": " to wait for this? This can't be real time, right? Well, all this takes is about 12 milliseconds"}, {"start": 314.4, "end": 320.96, "text": " per frame. Again, and this is where I fell off the chair when reading this paper. Of course,"}, {"start": 320.96, "end": 326.8, "text": " not even this technique is perfect. The glossary reflections are a little blurry at places"}, {"start": 326.8, "end": 333.2, "text": " and artifacts in the lighting can still appear. But if this is not a quantum leap in light"}, {"start": 333.2, "end": 339.52000000000004, "text": " transport research, I don't know what is. Plus, if you wish to see some properly detailed comparisons"}, {"start": 339.52000000000004, "end": 344.88, "text": " against previous techniques, make sure to have a look at the paper. And if you have been holding"}, {"start": 344.88, "end": 352.0, "text": " onto your papers so far, now squeeze that paper because everything that you see in this paper"}, {"start": 352.0, "end": 358.88, "text": " was done by two people. Huge congratulations, Chris and Alexei. 
And if you are wondering,"}, {"start": 358.88, "end": 365.2, "text": " if we ever get to use these techniques, don't forget Nvidia's Marbles demo is already out there"}, {"start": 365.2, "end": 372.16, "text": " for all of us to use. And it gets better, for instance, not many know that they already have"}, {"start": 372.16, "end": 378.96, "text": " a denoising technique that is online and ready to use for all of us. This one is a professional"}, {"start": 378.96, "end": 386.4, "text": " great tool right there. This is really incredible. They have so many tools out there for us to use."}, {"start": 386.4, "end": 393.84, "text": " I check what Nvidia is up to daily. And I still quite often get surprised about how much they"}, {"start": 393.84, "end": 401.67999999999995, "text": " have going on. And now, finally, real time light transport in our lifetimes. Oh yes, this paper"}, {"start": 401.68, "end": 409.36, "text": " is history in the making. This episode has been supported by Cohear AI. Cohear builds large language"}, {"start": 409.36, "end": 415.76, "text": " models and makes them available through an API so businesses can add advanced language understanding"}, {"start": 415.76, "end": 423.36, "text": " to their system or app quickly with just one line of code. You can use your own data, whether it's"}, {"start": 423.36, "end": 430.08, "text": " text from customer service requests, legal contracts or social media posts to create your own"}, {"start": 430.08, "end": 437.91999999999996, "text": " custom models to understand text or even generated. For instance, it can be used to automatically"}, {"start": 437.91999999999996, "end": 445.52, "text": " determine whether your messages are about your business hours, returns or shipping or it can be"}, {"start": 445.52, "end": 451.12, "text": " used to generate a list of possible sentences you can use for your product descriptions."}, {"start": 451.12, "end": 458.08, "text": " Make sure to go to Cohear.ai slash papers or click the link in the video description and give"}, {"start": 458.08, "end": 463.76, "text": " it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll"}, {"start": 463.76, "end": 493.59999999999997, "text": " see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XgdgSHweBUI
Google’s Parti AI: Magical Results! 💫
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Google Parti: Pathways Autoregressive Text-to-Image model" is available here: https://parti.research.google/ 4 of my favorite prompts from the video (add these to benchmarks if you feel like it): - surprised scholars looking at a magical parchment emitting magic dust high detail digital art disney style - scholar delighted by a very long disintegrating magical parchment with sparks and smoke coming out of it fantasy digital art disney style - stern looking fox in a labcoat, casting a magic spell, digital art - shiny cybertronic robot frog with leds studio lighting high detail digital art 📝 The fluid control works are available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/real_time_fluid_control_eg/ https://users.cg.tuwien.ac.at/zsolnai/gfx/fluid_control_msc_thesis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers 0:00 Google Parti 0:27 OpenAI's DALL-E 2 0:52 The problem 1:29 Google Imagen 2:09 Finally, Google Parti appears 2:22 1. Napoleon Cat returns 3:02 2. Water crocodile 3:14 3. Creativity in a machine 3:40 Why does this exist? 4:33 How does this help? 4:53 Let's test a huge prompt! 5:25 Watch it learn! 6:20 A new benchmark 6:36 More results 7:05 The age of AI generated images is here Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #parti #dalle
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I cannot tell you how excited I am by this paper. Wow! Today, you will see more incredible images generated by Google's newest AI called Parti. Yes, this is not from OpenAI, but from Google. So, what is going on here? Just a few months ago, OpenAI's image generator AI called DALL-E 2 took the world by storm. With that, you could name almost anything, cat Napoleon, a teddy bear on a skateboard in Times Square, a basketball player dunking as an explosion of a nebula, and it was able to create an appropriate image for it. It is also good enough to be used in our own thumbnails. However, there was one interesting thing about it. What do you think the prompt for this must have been? Hmm, not easy, right? Well, it was a sign that says Deep Learning. Oh yes, this was one of the failure cases. Please remember this. Now, we always say that in research, do not look at where we are, always look at where we will be, two more papers down the line. That is the first law of papers. However, we didn't even make it to two more papers down the line. What's more, we barely made it two months down the line, and scientists at Google came up with an amazing follow-up paper. It was called Imagen. Imagen was absolutely incredible as it could now finally synthesize text properly. It could also understand what we mean when we say a couple of glasses on a table, and won this little linguistic battle against OpenAI's DALL-E 2. And all this just two months after it. That is absolutely amazing. But hold on to your papers because you won't believe this one. I certainly didn't when I saw it first. Just about one month after Imagen, here is an even newer paper on AI image generation called Parti. That is fantastic. Welcome to our world, little AI. But why does this exist? Well, this is why. I'll explain in a moment, but first, let's have a look at what it can do through three of my favorite examples, and then we'll discuss why it exists. One, let's start with a banger, and recreate the legendary Napoleon cat with the new method. This is DALL-E 2's solution, and let's see the new one. I cannot believe it. This is at least as good as DALL-E 2's legendary solution, and I have to say maybe even a touch better. What a time to be alive. Two, a crocodile made of water. As someone who has spent some time researching controlling fluid and smoke simulations, this one is highly appreciated. Three, a detailed Athenian vase with Egyptian hieroglyphics. And more. I love how Parti was able to bring all of these concepts together into one coherent solution. This may be subjective, but if someone told me that a person made this, I would say that person is quite creative. But creativity in a machine, how cool is that? Now, remember this image, I said that this is why Parti exists. So, what is going on here? Well, look, the two previous techniques used a diffusion-based model. This means that when we ask it something, it starts out from noise, and over time, it learns to organize these pixels to form a beautiful image that matches our description better. Now, look. Aha! This Parti technique is not a diffusion-based model, it is an autoregressive model. What does that mean? It means that it uses no diffusion, it does not create a piece of noise and refine it into an image. Mm-hmm, no sir. Instead, it thinks of an image as a collection of little puzzle pieces. Why? Well, this hopefully helps with two other shortcomings of DALL-E 2.
One is generating a specific number of objects, which did not work too well before. And it can also deal with super long prompts, much longer than previous ones. Got you excited now? You know what? Let's test that right now, together. Let's add this prompt. Oh my goodness, now that is a long prompt. Who can paint this? Almost nobody. In fact, it is the description of Van Gogh's Starry Night without saying that we are looking for Starry Night. I am itching to see this. And... Oh wow. All of them are lovely. Now, we have two more really spectacular things about this paper. One, we can witness how it learns to draw these beautiful images as we increase the model size, which roughly tells us how capable the AI is. We can take a smaller model and ask it to create a kangaroo with sunglasses and a sign that says hello friends, and we get this. Well, that is a start. It can't really write yet and the details are lacking. But when we use the same architecture, with the difference that we increase the model size to be about 50 times bigger, we get this. Oh my goodness. It not only learns to write, but the quality of the output is also leaps and bounds better. What do you think another 50-fold increase would result in? That must be something truly incredible. Let me know in the comments below. Alongside the paper, scientists at Google also released a bunch of prompts as a benchmark for testing future image generator AIs. And yes, there are some good ones in there, but if I may make some recommendations, I would love to see some of these prompts of mine in such a benchmark. For instance, the fox scientist, the scholars, and the cyber frog were very well received by you Fellow Scholars, and it would be super cool to be able to compare how new, more elaborate AI models are able to deal with these. I also put a text version of these in the video description if someone is interested. So it's official. The age of beautiful AI generated images is now here. Does this get your mind going? What do you think? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance, and hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
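To make the diffusion versus autoregressive distinction above a bit more concrete, here is a minimal Python sketch of the two sampling loops. This is not Parti's or Imagen's actual code; denoise_step, next_token_distribution and decode_tokens_to_image are hypothetical stand-ins for the heavy learned networks, passed in as arguments.

import numpy as np

def generate_diffusion(denoise_step, steps=50, shape=(64, 64, 3)):
    # Diffusion-style sampling: start from pure noise and
    # repeatedly refine the whole image at once.
    image = np.random.randn(*shape)           # pure noise
    for t in reversed(range(steps)):          # t = steps-1, ..., 0
        image = denoise_step(image, t)        # learned model removes a bit of noise
    return image

def generate_autoregressive(next_token_distribution, decode_tokens_to_image,
                            prompt_tokens, num_image_tokens=1024):
    # Parti-style sampling: build the image as a sequence of discrete
    # "puzzle piece" tokens, one token at a time, then decode them.
    tokens = list(prompt_tokens)
    for _ in range(num_image_tokens):
        probs = next_token_distribution(tokens)   # conditioned on everything so far
        tokens.append(int(np.random.choice(len(probs), p=probs)))
    return decode_tokens_to_image(tokens[len(prompt_tokens):])

The autoregressive loop makes the puzzle-piece idea literal: each discrete token is one small piece of the final picture, predicted from the prompt and all the pieces placed so far.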
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajol Leifahir."}, {"start": 5.0, "end": 9.0, "text": " I cannot tell you how excited I am by this paper."}, {"start": 9.0, "end": 10.0, "text": " Wow!"}, {"start": 10.0, "end": 17.0, "text": " Today, you will see more incredible images generated by Google's newest AI called Party."}, {"start": 17.0, "end": 22.0, "text": " Yes, this is not from OpenAI, but from Google."}, {"start": 22.0, "end": 24.0, "text": " So, what is going on here?"}, {"start": 24.0, "end": 31.0, "text": " Just a few months ago, OpenAI's Image Generator AI called Dolly II took the world by storm."}, {"start": 31.0, "end": 40.0, "text": " With that, you could name almost anything, Katnopolian, a teddy bear on a skateboard, on Times Square, a basketball player,"}, {"start": 40.0, "end": 46.0, "text": " dunking as an explosion of a nebula, and it was able to create an appropriate image for it."}, {"start": 46.0, "end": 50.0, "text": " It is also good enough to be used in our own thumbnails."}, {"start": 50.0, "end": 54.0, "text": " However, there was one interesting thing about it."}, {"start": 54.0, "end": 57.0, "text": " What do you think the prompt for this must have been?"}, {"start": 57.0, "end": 60.0, "text": " Hmm, not easy, right?"}, {"start": 60.0, "end": 65.0, "text": " Well, it was a sign that says Deep Learning."}, {"start": 65.0, "end": 68.0, "text": " Oh yes, this was one of the failure cases."}, {"start": 68.0, "end": 70.0, "text": " Please remember this."}, {"start": 70.0, "end": 78.0, "text": " Now, we always say that in research, do not look at where we are, always look at where we will be, two more papers down the line."}, {"start": 78.0, "end": 81.0, "text": " That is the first law of papers."}, {"start": 81.0, "end": 85.0, "text": " However, we didn't even make it to two more papers down the line."}, {"start": 85.0, "end": 94.0, "text": " What's more, we barely made two months down the line, and scientists at Google came up with an amazing follow-up paper."}, {"start": 94.0, "end": 97.0, "text": " It was called Imogen."}, {"start": 97.0, "end": 104.0, "text": " Imogen was absolutely incredible as it could now finally synthesize text properly."}, {"start": 104.0, "end": 115.0, "text": " It could also understand when we say a couple of glasses on a table, and one this little linguistic battle against OpenAI's Dolly II."}, {"start": 115.0, "end": 118.0, "text": " And all this just two months after it."}, {"start": 118.0, "end": 121.0, "text": " That is absolutely amazing."}, {"start": 121.0, "end": 125.0, "text": " But hold on to your papers because you won't believe this one."}, {"start": 125.0, "end": 128.0, "text": " I certainly didn't when I saw it first."}, {"start": 128.0, "end": 137.0, "text": " Just about one month after Imogen, here is an even newer paper on AI image generation called Party."}, {"start": 137.0, "end": 139.0, "text": " That is fantastic."}, {"start": 139.0, "end": 141.0, "text": " Welcome to our world, little AI."}, {"start": 141.0, "end": 144.0, "text": " But why does this exist?"}, {"start": 144.0, "end": 146.0, "text": " Well, this is why."}, {"start": 146.0, "end": 156.0, "text": " I'll explain in a moment, but first, let's have a look at what it can do through three of my favorite examples, and then we'll discuss why it exists."}, {"start": 156.0, "end": 163.0, "text": " One, let's start with the banger, and recreate the legendary Napoleon cat with the new method."}, {"start": 
163.0, "end": 168.0, "text": " This is Dolly II's solution, and let's see the new one."}, {"start": 168.0, "end": 170.0, "text": " I cannot believe it."}, {"start": 170.0, "end": 179.0, "text": " This is at least as good as Dolly II's legendary solution, and I have to say maybe even a touch better."}, {"start": 179.0, "end": 181.0, "text": " What a time to be alive."}, {"start": 181.0, "end": 185.0, "text": " Two, a crocodile made of water."}, {"start": 185.0, "end": 194.0, "text": " As someone who has spent some time researching, controlling, fluid, and smoke simulations, this one is highly appreciated."}, {"start": 194.0, "end": 199.0, "text": " Three, a detailed Athenian vase with Egyptian hieroglyphics."}, {"start": 199.0, "end": 200.0, "text": " And more."}, {"start": 200.0, "end": 208.0, "text": " I love how Party was able to bring all of these concepts together into one coherent solution."}, {"start": 208.0, "end": 216.0, "text": " This may be subjective, but if someone told me that a person made this, I would say that person is quite creative."}, {"start": 216.0, "end": 221.0, "text": " But creativity in a machine, how cool is that?"}, {"start": 221.0, "end": 226.0, "text": " Now, remember this image, I said that this is why Party exists."}, {"start": 226.0, "end": 229.0, "text": " So, what is going on here?"}, {"start": 229.0, "end": 234.0, "text": " Well, look, the two previous techniques used a diffusion-based model."}, {"start": 234.0, "end": 248.0, "text": " This means that when we ask it something, it starts out from noise and over time, it learns to organize these pixels to form a beautiful image that matches our description better."}, {"start": 248.0, "end": 250.0, "text": " Now, look."}, {"start": 250.0, "end": 251.0, "text": " Aha!"}, {"start": 251.0, "end": 257.0, "text": " This Party technique is not a diffusion-based model, it is an autoregressive model."}, {"start": 257.0, "end": 258.0, "text": " What does that mean?"}, {"start": 258.0, "end": 265.0, "text": " It means that it uses no diffusion, it does not create a piece of noise and refine it into an image."}, {"start": 265.0, "end": 267.0, "text": " Mm-hmm, no sir."}, {"start": 267.0, "end": 273.0, "text": " Instead, it thinks of an image as a collection of little puzzle pieces."}, {"start": 273.0, "end": 274.0, "text": " Why?"}, {"start": 274.0, "end": 279.0, "text": " Well, this hopefully helps with two other shortcomings of Dolly II."}, {"start": 279.0, "end": 285.0, "text": " One is generating a specific number of objects that did not work too well before."}, {"start": 285.0, "end": 291.0, "text": " And it can also deal with super long prompts, much longer than previous ones."}, {"start": 291.0, "end": 293.0, "text": " You got me excited now?"}, {"start": 293.0, "end": 294.0, "text": " You know what?"}, {"start": 294.0, "end": 296.0, "text": " Let's test that right now together."}, {"start": 296.0, "end": 298.0, "text": " Let's add this prompt."}, {"start": 298.0, "end": 302.0, "text": " Oh my goodness, now that is a long prompt."}, {"start": 302.0, "end": 304.0, "text": " Who can paint this?"}, {"start": 304.0, "end": 305.0, "text": " Almost nobody."}, {"start": 305.0, "end": 313.0, "text": " In fact, it is the description of Van Gogh's Starry Night without saying that we are looking for Starry Night."}, {"start": 313.0, "end": 316.0, "text": " I am itching to see this."}, {"start": 316.0, "end": 317.0, "text": " And..."}, {"start": 317.0, "end": 319.0, "text": " Oh wow."}, {"start": 319.0, "end": 321.0, 
"text": " All of them are lovely."}, {"start": 321.0, "end": 326.0, "text": " Now, we have two more really spectacular things about this paper."}, {"start": 326.0, "end": 333.0, "text": " One, we can witness how it learns to draw these beautiful images as we increase the model size,"}, {"start": 333.0, "end": 338.0, "text": " which roughly tells us how capable the AI is."}, {"start": 338.0, "end": 348.0, "text": " We can take a smaller model and ask it to create a kangaroo with sunglasses and a sign that says hello friends and we get this."}, {"start": 348.0, "end": 350.0, "text": " Well, that is a start."}, {"start": 350.0, "end": 354.0, "text": " It can't really write yet and the details are lacking."}, {"start": 354.0, "end": 364.0, "text": " But when we use the same architecture with the difference that we increase the model size to be about 50 times bigger, we get this."}, {"start": 364.0, "end": 366.0, "text": " Oh my goodness."}, {"start": 366.0, "end": 373.0, "text": " It not only learns to write, but the quality of the output is also leaps and bounds better."}, {"start": 373.0, "end": 378.0, "text": " What do you think what results would another 50 fold increase result in?"}, {"start": 378.0, "end": 381.0, "text": " That must be something truly incredible."}, {"start": 381.0, "end": 383.0, "text": " Let me know in the comments below."}, {"start": 383.0, "end": 393.0, "text": " Alongside the paper, scientists at Google also released a bunch of prompts as a benchmark for testing future image generator AI's."}, {"start": 393.0, "end": 404.0, "text": " And yes, there are some good ones in there, but if I may make some recommendations, I would love to see some of these prompts of mine in such a benchmark."}, {"start": 404.0, "end": 420.0, "text": " For instance, the Fox scientists, the scholars and the cyber frog are very well received by you fellow scholars and it would be super cool to be able to compare how new, more elaborate AI models are able to deal with these."}, {"start": 420.0, "end": 426.0, "text": " I also put a text version of these in the video description if someone is interested."}, {"start": 426.0, "end": 428.0, "text": " So it's official."}, {"start": 428.0, "end": 433.0, "text": " The age of beautiful AI generated images is now here."}, {"start": 433.0, "end": 435.0, "text": " Does this get your mind going?"}, {"start": 435.0, "end": 436.0, "text": " What do you think?"}, {"start": 436.0, "end": 438.0, "text": " Let me know in the comments below."}, {"start": 438.0, "end": 448.0, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute."}, {"start": 448.0, "end": 451.0, "text": " No commitments or negotiation required."}, {"start": 451.0, "end": 468.0, "text": " Just sign up and launch an instance and hold on to your papers because with Lambda GPU cloud, you can get on demand a 100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 468.0, "end": 471.0, "text": " That's 73% savings."}, {"start": 471.0, "end": 485.0, "text": " Did I mention they also offer persistent storage, so join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances, workstations or servers."}, {"start": 485.0, "end": 493.0, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 493.0, "end": 501.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5LL6z1Ganbw
NVIDIA’s AI Plays Minecraft After 33 Years of Training! 🤖
❤️ If you wish to support us and watch these videos in early access, check this out: - https://www.patreon.com/TwoMinutePapers 📝 The paper "MineDojo - Building Open-Ended Embodied Agents with Internet-Scale Knowledge" is available here: https://minedojo.org/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2019147/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Minecraft 0:15 GANCraft 1:31 AI playing games 1:52 NVIDIA tries Minecraft 2:20 But how? 3:12 Can this really work? 3:32 Teaching an AI English 4:19 1 - Exploration 4:48 2 - Building a fence 5:03 3 - Getting a bucket of lava 5:18 4 - Building a portal 5:32 5 - Final boss time 6:02 Long time horizons 6:25 More results 7:09 Does this really work? Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #minecraft
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see whether NVIDIA's modern AI system, which they call MineDojo, can learn to play and build things in Minecraft, and more. In an earlier episode, we explored a previous project that goes by the name GANcraft, which performed world-to-world translation. What is that? Well, simple, we give an AI a very rough draft of a virtual world, and out comes a much more detailed and beautiful one. This almost seems like science fiction. Really, just dream up something and it makes it happen. And it gets better, it creates water, it understands the concept of an island, and it creates a beautiful landscape, also with vegetation. Insanely, it even seems to have some concept of reflections, although they will need some extra work to get it perfectly right. It also supported interpolation, which means that we can create one landscape and ask the AI to create a blend between different styles. We just look at the output animations and pick the one that we like best. And with this, finally, we also have some artistic control over the mood of the results. Absolutely amazing. But then, they thought, why stop there? For instance, DeepMind has a long history of using AI systems to master all kinds of games, from chess to Go to StarCraft II. OpenAI has a Dota 2 AI project that is able to challenge a world champion team. So, NVIDIA thought, why not try their hands at Minecraft? Minecraft is an open world game, one of the most played games in the world, which is kind of like a virtual sandbox. Here you can build things, explore, but really do whatever you wish. It's a sandbox after all. So, scientists at NVIDIA thought, let's train an AI to play this game and see what happens. Okay, but how? Well, most AI systems of today require lots and lots of training data. Wait a minute, there's no shortage of that out there on the internet, that's for sure. Let me explain. We can have this little AI sit down and watch hundreds of thousands of Minecraft tutorial videos on YouTube, 33 years of footage in total, my goodness. Then, have it read over 7000 wiki pages, and then it shall become the ultimate Reddit lurker. Wow, that makes a ton of sense. Just think about it. It can learn practical knowledge from the tutorial videos and encyclopedic knowledge from the wiki, and it can learn what makes the best creations the best that are shared on Reddit. That sounds fantastic on paper, but there is so much to do and to learn about this game, can an AI really get an understanding of all this? Well, hold on to your papers and let's see together what this AI could learn from all this data. But, wait a minute, this is a gamer AI that uses the controller to move around in a virtual world. But, how do we instruct it? Do we need to speak robot? If not, well, it doesn't even understand English text. What do we do with that? Well, remember OpenAI's GPT-3, which is a neural network model that has read almost the entirety of the internet. This one has proper English knowledge, so we plug that in and bam! Now it can read that wiki, and it gets better, because we can now even give it text instructions. Now, let's see how well it fares through five of my favorite examples. One, we can ask it to explore an ocean monument. I love this one because the text description is sufficiently vague. How could the machine understand what exploring means? Well, humans do, and this learns from humans, as there must be tons of tutorial videos out there on exploration.
And it seems to me that it was able to learn the concept correctly, loving it. Two, it can encircle these llamas with a fence. That is excellent. It understands what object it needs to use, where to move, where to look, and that not even a tiny gap is allowed. Very impressive. Three, this will be a more dangerous quest. Now scoop a bucket of lava. Oh my goodness. Be careful there. Do not fall. We are getting there and... got it. Four, we can even ask it to build a nether portal. It builds the correct portal frame, and did it use the correct materials? Does it work? Yes and yes. Good job, little AI. And five, it is final boss time. Literally. Oh yes, someone was brave enough to ask the AI to fight an Ender Dragon, essentially the final boss of the game, and the AI was brave enough to try it. Well, apparently this is a limited example, as it does not appear to be charging at the AI, but the AI seems to know what we are looking for and what fighting entails in this game. Starting this game from scratch and building up everything to defeat such a dragon takes a long time horizon, and will be an excellent benchmark for the next AI one more paper down the line. I would love to see it perform this start to end. Make sure to subscribe: if such a paper appears, I will be here to show it to you. While we are looking at more examples of what it could do, I have to note that we have barely scratched the surface here. For instance, it understands text, yes, but if we attach a speech recognition AI to this agent, we don't even need to write. It can essentially be a little virtual friend. How cool is that? What a time to be alive. Okay, so this is MineDojo, an AI agent that understands English and can learn to navigate these virtual worlds so well that we can give it a task and it will execute it. And not just some simple ones, we are talking a wide variety of complex tasks. Now all this footage looks great, but how do we know if this is really performing these tasks correctly? Well, if you have been holding onto your papers, now squeeze that paper, because they had an experienced human evaluator look at these results, who agreed with the AI solutions about 97% of the time. Wow, that is an incredible result. And don't forget, NVIDIA is amazing at democratizing these works and putting them into the hands of everyone, and this one is an excellent example of that. If you have some programming knowledge, you can give it a try right now. So, does this get your mind going? What would you use this for? Let me know in the comments below. Thanks for watching and for your generous support, and I'll see you next time.
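For the technically curious, the glue between language and gameplay can be sketched roughly like this: a video-text model trained on those YouTube tutorials scores how well the agent's recent frames match the given instruction, and that score serves as the reinforcement learning reward. Below is a hypothetical Python illustration of this idea under stated assumptions, not MineDojo's real API; env, policy, embed_text and embed_video are all placeholders.

import numpy as np

def cosine(a, b):
    # Similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def rollout_with_language_reward(env, policy, embed_text, embed_video,
                                 instruction="explore an ocean monument",
                                 horizon=1000, clip_len=16):
    # One episode where the reward is the similarity between the
    # instruction text and a sliding window of recent frames.
    goal = embed_text(instruction)            # fixed goal embedding
    obs = env.reset()
    frames, total_reward = [], 0.0
    for _ in range(horizon):
        action = policy(obs, goal)            # the policy is conditioned on the goal
        obs, frame = env.step(action)         # placeholder env also returns the rendered frame
        frames.append(frame)
        if len(frames) >= clip_len:           # score the last few seconds of video
            total_reward += cosine(goal, embed_video(frames[-clip_len:]))
    return total_reward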
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kano-Jolna Ifehir."}, {"start": 4.8, "end": 16.4, "text": " Today we are going to see whether Nvidia's modern AI system they call MindOjo can learn to play and build things in Minecraft and more."}, {"start": 16.4, "end": 25.6, "text": " In an earlier episode, we explored a previous project that goes by the name GANCREFT, which performed World to World Translation."}, {"start": 25.6, "end": 27.400000000000002, "text": " What is that?"}, {"start": 27.4, "end": 37.0, "text": " Well, simple, we give an AI a very rough draft of a virtual world and out comes a much more detailed and beautiful one."}, {"start": 37.0, "end": 40.2, "text": " This almost seems like science fiction."}, {"start": 40.2, "end": 44.4, "text": " Really, just dream up something and it makes it happen."}, {"start": 44.4, "end": 56.4, "text": " And it gets better, it creates water, it understands the concept of an island, and it creates a beautiful landscape, also with vegetation."}, {"start": 56.4, "end": 66.0, "text": " In sanity, it even seems to have some concept of reflections, although they will need some extra work to get it perfectly right."}, {"start": 66.0, "end": 76.4, "text": " It also supported interpolation, which means that we can create one landscape and as the AI to create a blend between different styles."}, {"start": 76.4, "end": 89.2, "text": " We just look at the output animations and pick the one that we like best. And with this, finally, we also have some artistic control over the mood of the results."}, {"start": 89.2, "end": 91.2, "text": " Absolutely amazing."}, {"start": 91.2, "end": 94.4, "text": " But then, they thought, why stop there?"}, {"start": 94.4, "end": 104.4, "text": " For instance, DeepMind has a long history of using AI systems to master all kinds of games, from chess to go to StarCraft 2."}, {"start": 104.4, "end": 111.4, "text": " OpenAI has a Dota 2 AI project that is able to challenge a world champion team."}, {"start": 111.4, "end": 116.4, "text": " So, and video thought, why not try their hands at Minecraft?"}, {"start": 116.4, "end": 125.4, "text": " Minecraft is an open world game, one of the most played games in the world, which is kind of like a virtual sandbox."}, {"start": 125.4, "end": 132.4, "text": " Here you can build things, explore, but really do whatever you wish. It's a sandbox after all."}, {"start": 132.4, "end": 140.4, "text": " So, scientists at Nvidia thought, less train an AI to play this game and see what happens."}, {"start": 140.4, "end": 143.4, "text": " Okay, but how?"}, {"start": 143.4, "end": 149.4, "text": " Well, most AI systems of today require lots and lots of training data."}, {"start": 149.4, "end": 155.4, "text": " Wait a minute, there's no shortage of that out there on the internet, that's for sure."}, {"start": 155.4, "end": 169.4, "text": " Let me explain. We can have this little AI sit down and watch hundreds of thousands of Minecraft tutorial videos on YouTube, 33 years of footage in total, my goodness."}, {"start": 169.4, "end": 177.4, "text": " Then, have it read over 7000 wiki pages and then it shall become the ultimate Reddit lurker."}, {"start": 177.4, "end": 181.4, "text": " Wow, that makes a ton of sense. 
Just think about it."}, {"start": 181.4, "end": 193.4, "text": " It can learn practical knowledge from the tutorial videos and psychopedic knowledge from the wiki and it can learn what makes the best creations the best that are shared or read it."}, {"start": 193.4, "end": 205.4, "text": " That sounds fantastic on paper, but there is so much to do and to learn about this game can an AI really get an understanding of all this?"}, {"start": 205.4, "end": 212.4, "text": " Well, hold on to your papers and let's see together what this AI could learn from all this data."}, {"start": 212.4, "end": 220.4, "text": " But, wait a minute, this is a gamer AI that uses the controller to move around in a virtual world."}, {"start": 220.4, "end": 228.4, "text": " But, how do we instruct it? Do we need to speak robot? If not, well, it doesn't even understand English text."}, {"start": 228.4, "end": 239.4, "text": " What do we do with that? Well, remember, open AI's GPT3 which is a neural network model that has read almost the entirety of the internet."}, {"start": 239.4, "end": 253.4, "text": " This one has proper English knowledge, so we plug that in and bam! Now you can read that wiki pdia and it gets better because we can now even give it text instructions."}, {"start": 253.4, "end": 258.4, "text": " Now, let's see how well it fares through 5 of my favorite examples."}, {"start": 258.4, "end": 267.4, "text": " One, we can ask it to explore an ocean monument. I love this one because the text description is sufficiently vague."}, {"start": 267.4, "end": 280.4, "text": " How could the machine understand what exploring means? Well, humans do and this learns from humans as there must be tons of tutorial videos out there on exploration."}, {"start": 280.4, "end": 286.4, "text": " And it seems to me that it was able to learn the concept correctly, loving it."}, {"start": 286.4, "end": 303.4, "text": " Two, it can encircle these llamas with a fence. That is excellent. It understands what object it needs to use, where to move, where to look, and that not even a tiny gap is allowed. Very impressive."}, {"start": 303.4, "end": 317.4, "text": " Three, this will be a more dangerous quest. Now scoop a bucket of lava. Oh my goodness. Be careful there. Do not fall. We are getting there and got it."}, {"start": 317.4, "end": 332.4, "text": " Four, we can even ask it to build another portal. It builds the correct portal frame and did it use the correct materials? Does it work? Yes and yes. Good job little AI."}, {"start": 332.4, "end": 350.4, "text": " And five, it is final boss time. Literally. 
Oh yes, someone was brave enough to ask the AI to fight an ender dragon, essentially the final boss of the game, and the AI was brave enough to try it."}, {"start": 350.4, "end": 363.4, "text": " Well, apparently this is a limited example as it does not appear to be charging at the AI, but the AI seems to know what we are looking for and what fighting entails in this game."}, {"start": 363.4, "end": 377.4, "text": " Starting this game from scratch and building up everything to defeat such a dragon takes a long time horizon and will be an excellent benchmark for the next AI one more paper down the line."}, {"start": 377.4, "end": 385.4, "text": " I love it to perform this start to end make sure to subscribe if such a paper appears I will be here to show it to you."}, {"start": 385.4, "end": 392.4, "text": " While we are looking at more examples of what it could do, I have to note that we have barely scratched the surface here."}, {"start": 392.4, "end": 409.4, "text": " For instance, it understands text yes, but if we attach a speech recognition AI to this agent, we don't even need to write. It can essentially be a little virtual friend. How cool is that? What a time to be alive."}, {"start": 409.4, "end": 423.4, "text": " Okay, so this is Mindodro, an AI agent that understands English and can learn to navigate these virtual worlds so well we can give it a task and it would execute it."}, {"start": 423.4, "end": 437.4, "text": " And not just some simple ones, we are talking a wide variety of complex tasks. Now all this footage looks great, but how do we know if this is really performing these tasks correctly?"}, {"start": 437.4, "end": 452.4, "text": " Well, if you have been holding onto your papers now squeeze that paper because they had an experienced human evaluator look at these results and agreed with the AI solutions about 97% of the time."}, {"start": 452.4, "end": 466.4, "text": " Wow, that is an incredible result. And don't forget Nvidia is amazing at democratizing these works and putting them into the hands of everyone and this one is an excellent example of that."}, {"start": 466.4, "end": 481.4, "text": " If you have some programming knowledge, you can give it a try right now. So, does this get your mind going? What would you use this for? Let me know in the comments below. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QqHchIFPE7g
NVIDIA GTC: When Simulation Becomes Reality! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers If everything goes well, this will be my GTC talk: https://www.nvidia.com/gtc/session-catalog/?ncid=so-face-527732&tab.catalogallsessionstab=16566177511100015Kus#/session/16559245032830019Q6q ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see that the research papers that you see here in these videos are real. So real that a ton of them are already seeing use in NVIDIA's AI projects, some of which are so advanced, they seem to be straight out of a science fiction movie, and even more of them are coming. We will talk about NVIDIA's GTC event, and as always, we will try to have a scholarly angle on it. For instance, in our Tesla AI Day recap video, we marveled at how Transformer neural networks, a paper from 2017, saw use just a couple of years later in real self-driving cars deployed all around the world. That kind of tech transfer is extraordinary. So, around here we talk about a ton of amazing research papers, and under almost every video I see some of you asking, okay, this all looks great, but when do we get to use it? That is a completely understandable question, and finally, you'll get your answers today. And it gets better, we will also learn three lessons. Lesson 1 is that AI is everywhere today, and it is improving incredibly fast. For instance, it can now train these virtual warriors from scratch. Or, as many of us have these computers with weird camera placements, it can recreate our image such that it appears that we are holding eye contact. Yes, that's right, parts of this image are synthetic, parts are made on the fly by an AI, and it is already almost impossible to notice. And this is FourCastNet. This is a physics model that can predict outlier weather events, and it runs not in a data center anymore. No, no, it runs on just one NVIDIA graphics card. That is fantastic, but lesson 2: AI is opening up completely new frontiers that we never even thought of. And now, hold on to your papers and check this out. They can now also create a virtual world where they can accurately simulate infrastructure projects down to the accuracy of a millimeter. When talking about digital copies of real things, this is a virtual assistant which understands English and the questions we ask it, and, let's be honest, today all that is expected. However, what is not expected is that it can synthesize an answer in a real person's voice. This person is Jensen Huang, the CEO of NVIDIA, and it also animates its mouth and gestures accordingly. Synthetic biology is about designing biological systems at multiple levels, from individual molecules up, and all this in real time. Look at the face of this proud man, this is priceless. And we talked briefly about Tesla, but did you know that NVIDIA is also making progress on self-driving cars? This is how it sees the world. Marvelous, but the self-driving part is nothing. Watch carefully because here it comes. Oh yes, lesson number three. Everything is connected. See the assistant there: it sees and identifies the passenger, understands natural language, and it also understands which building is which, and what shows are played in them today. This is self-driving cars combined with the virtual assistant, and look. Oh my, this is also combined with virtual copies of real worlds too. By the end of 2024, they expect to have a virtual video game version of all major North American highways, plus Europe and Asia. What in the world? Wow. So, I hear you asking, what are all these virtual video games good for? Why do we need a video game world? Why not just use the real one? It's right here. Well, one of the answers is domain randomization.
Here, we can re-enact real situations for self-driving cars, and even create arbitrarily complex new situations, and these AIs will be able to learn from them in a safe environment. You see, once again, everything is connected. And there is still so much more to talk about. My goodness. They also have a cell visualization system that can show us real cells splitting right in front of our eyes in real time. Not so long ago, such a simulation would have taken days, and now it runs in real time. They can also create a digital version of real fulfillment centers. This virtual world can be simulated and optimized before the real one is changed. Does this mean that... yes, that is exactly right. Now, companies can find the optimal layout for their plants before making any physical investments. Kind of like playing the game Factorio, but in real life. How cool is that? Their vision system can also look at conveyor belts and adjust their speed based on congestion. And once again, you see that everything is connected: the self-driving engine can also be put not only into a car, but into a little robot, and thus it can now navigate those warehouses. So, whenever you see a new research paper and it does not immediately show how it could be used, please do not forget, everything is connected. They also have their Isaac Gym system, in which these robots can train safely before they are deployed into the real world. And these simulations can get insanely detailed and accurate. For instance, here the physics and connections of 5400 parts are simulated. And as a result, this virtual robot works exactly as the real one. This application would have been unfathomable just a few years ago. And yes, this has huge ramifications. For instance, in the future, whenever we have this robot, called ANYmal, pass a test in a simulation, it is very likely that it will pass in the real world too. A simulation that is nearly the same as reality. Just think about that. What a time to be alive. And once again, applications like this require light transport simulations, cloud streaming, and collaboration with an AI assistant. And now you can even stream these from the cloud if you don't have a big graphics card at home. And the results are absolutely amazing. Here you see what this place will look like at noon. Now, how about some trees? Nice. And maybe more variation. And we don't even need to be experts in 3D modeling, we just say what we wish to see to the virtual assistant. And there we go. We can also ask: what does this look like at night? And it looks spectacular. All that is truly amazing. My eyes were popping out like these machines when seeing these results for the first time. And with all these remarkable results, we really just scratched the surface. There is so much going on. It is almost impossible to keep track of all these amazing projects. Here are some more for your enjoyment. So the papers that you see here in this series are real. As real as it gets. We can get from a research paper to a real product in just a couple of years. And everything that you saw here today is already in production, or will be in production in the near future. By the way, if everything goes well, I will hold my own talk at this year's GTC as well, and publish a video of it on this channel. Make sure to subscribe and hit the bell icon to not miss it. So what do you think? Does this get your mind going? What would you use this for? Let me know in the comments below.
If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers because with Lambda GPU cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support. And I'll see you next time.
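Domain randomization, the answer given above to why we need a video game world at all, is easy to express in code: every training episode samples fresh scene parameters, so the learner never overfits to one particular world. Here is a minimal, hypothetical Python sketch; build_scene and train_one_episode stand in for the actual simulator and learning code, and the parameter ranges are invented for illustration.

import random

def randomized_scene_params():
    # Sample a new virtual world configuration for each training episode.
    return {
        "time_of_day": random.uniform(0.0, 24.0),      # lighting changes drastically
        "fog_density": random.uniform(0.0, 0.3),
        "num_vehicles": random.randint(0, 50),
        "num_pedestrians": random.randint(0, 30),
        "road_friction": random.uniform(0.4, 1.0),     # dry asphalt, rain, ice
    }

def train_with_domain_randomization(build_scene, train_one_episode, episodes=10000):
    for _ in range(episodes):
        scene = build_scene(**randomized_scene_params())  # a fresh, safe virtual world
        train_one_episode(scene)   # the driving policy learns across all the variations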
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.8, "text": " Today we are going to see that the research papers that you see here in these videos are real."}, {"start": 10.8, "end": 16.88, "text": " So real that a ton of them are already seeing use in NVIDIA's AI projects,"}, {"start": 16.88, "end": 23.2, "text": " some of which are so advanced, they seem to be straight out of a science fiction movie"}, {"start": 23.2, "end": 26.0, "text": " and even more of them are coming."}, {"start": 26.0, "end": 34.0, "text": " We will talk about NVIDIA's GTC event and as always we will try to have a scholarly angle on it."}, {"start": 34.0, "end": 40.96, "text": " For instance, in our Tesla AI-D recap video, we marveled at how Transformer Neural Networks,"}, {"start": 40.96, "end": 50.8, "text": " a paper from 2017 saw use just a couple years later in real self-driving cars deployed all around the world."}, {"start": 50.8, "end": 54.0, "text": " That kind of tech transfer is extraordinary."}, {"start": 54.0, "end": 62.0, "text": " So, around here we talk about a ton of amazing research papers and under almost every video"}, {"start": 62.0, "end": 68.96000000000001, "text": " I see some of you asking, okay, this all looks great, but when do we get to use it?"}, {"start": 68.96000000000001, "end": 74.96000000000001, "text": " That is a completely understandable question and finally, you'll get your answers today."}, {"start": 74.96000000000001, "end": 79.03999999999999, "text": " And it gets better, we will also learn three lessons."}, {"start": 79.04, "end": 86.0, "text": " Lesson 1 is that AI is everywhere today and it is improving incredibly fast."}, {"start": 86.0, "end": 90.4, "text": " For instance, it can now train these virtual warriors from scratch."}, {"start": 90.4, "end": 94.80000000000001, "text": " Or as many of us have these computers with weird camera placements,"}, {"start": 94.80000000000001, "end": 101.36000000000001, "text": " it can recreate our image such that it appears that we are holding eye contact."}, {"start": 101.36000000000001, "end": 105.36000000000001, "text": " Yes, that's right, parts of this image are synthetic,"}, {"start": 105.36, "end": 112.4, "text": " parts are made on the fly by an AI and it is already almost impossible to notice."}, {"start": 112.4, "end": 114.56, "text": " And this is forecast net."}, {"start": 114.56, "end": 119.28, "text": " This is a physics model that can predict outlier weather events"}, {"start": 119.28, "end": 122.56, "text": " and it runs not in a data center anymore."}, {"start": 122.56, "end": 126.96000000000001, "text": " No, no, it runs on just one and videographic card."}, {"start": 126.96000000000001, "end": 130.64, "text": " That is fantastic, but lesson 2."}, {"start": 130.64, "end": 136.16, "text": " AI is opening up completely new frontiers that we never even thought of."}, {"start": 136.16, "end": 140.39999999999998, "text": " And now, hold on to your papers and check this out."}, {"start": 140.39999999999998, "end": 145.6, "text": " They can now also create a virtual world where they can accurately simulate"}, {"start": 145.6, "end": 150.07999999999998, "text": " infrastructure projects down to the accuracy of a millimeter."}, {"start": 150.07999999999998, "end": 153.27999999999997, "text": " When talking about digital copies of real things,"}, {"start": 153.27999999999997, "end": 157.04, "text": " this is a virtual assistant 
which understands English,"}, {"start": 157.04, "end": 162.79999999999998, "text": " the questions we ask it and let's be honest, today all that is expected."}, {"start": 163.44, "end": 168.56, "text": " However, what is not expected is that it can synthesize and answer"}, {"start": 168.56, "end": 170.56, "text": " in a real person's voice."}, {"start": 170.56, "end": 174.56, "text": " This person is Janssen Huang, the CEO of Nvidia,"}, {"start": 174.56, "end": 178.79999999999998, "text": " and it also animates its mouth and gestures accordingly."}, {"start": 179.35999999999999, "end": 183.6, "text": " Synthetic biology is about designing biological systems at multiple levels"}, {"start": 183.6, "end": 187.51999999999998, "text": " from individual molecules up and all this in real time."}, {"start": 188.4, "end": 192.48, "text": " Look at the face of this proud man, this is priceless."}, {"start": 192.48, "end": 198.24, "text": " And we talked briefly about Tesla, but do you know that Nvidia is also making"}, {"start": 198.24, "end": 200.56, "text": " progress on self-driving cars?"}, {"start": 200.56, "end": 202.32, "text": " This is how it sees the world."}, {"start": 203.35999999999999, "end": 206.95999999999998, "text": " Marvelous, but the self-driving part is nothing."}, {"start": 207.6, "end": 210.24, "text": " Watch carefully because here it comes."}, {"start": 210.24, "end": 213.36, "text": " Oh yes, lesson number three."}, {"start": 213.92000000000002, "end": 215.68, "text": " Everything is connected."}, {"start": 215.68, "end": 220.24, "text": " CD Assistant there, it sees and identifies the passenger,"}, {"start": 220.88, "end": 226.96, "text": " understands natural language, and it also understands which building is which,"}, {"start": 226.96, "end": 229.52, "text": " and what shows are played in them today."}, {"start": 230.0, "end": 235.84, "text": " This is self-driving cars combined with the virtual assistant and look."}, {"start": 235.84, "end": 241.76, "text": " Oh my, this is also combined with virtual copies of real worlds too."}, {"start": 241.76, "end": 250.56, "text": " By the end of 2024, they expect to have a virtual video game version of all major North American highways"}, {"start": 250.56, "end": 252.96, "text": " plus Europe and Asia."}, {"start": 252.96, "end": 254.88, "text": " What in the world?"}, {"start": 254.88, "end": 255.76, "text": " Wow."}, {"start": 255.76, "end": 261.2, "text": " So, I hear you asking, what are all these virtual video games good for?"}, {"start": 261.2, "end": 263.6, "text": " Why do we need a video game world?"}, {"start": 263.6, "end": 265.76, "text": " Why not just use the real one?"}, {"start": 265.76, "end": 266.88, "text": " It's right here."}, {"start": 266.88, "end": 270.64, "text": " Well, one of the answers is domain randomization."}, {"start": 270.64, "end": 278.88, "text": " Here, we can re-enact real situations for self-driving cars and even create arbitrarily complex"}, {"start": 278.88, "end": 284.71999999999997, "text": " new situations and these AI's will be able to learn from them in a safe environment."}, {"start": 284.71999999999997, "end": 288.15999999999997, "text": " You see, once again, everything is connected."}, {"start": 288.15999999999997, "end": 291.52, "text": " And there is still so much more to talk about."}, {"start": 291.52, "end": 292.24, "text": " My goodness."}, {"start": 292.24, "end": 301.84000000000003, "text": " They also have the self-visualization system that can show us real cells 
splitting right in front of our eyes in real time."}, {"start": 301.84000000000003, "end": 309.28000000000003, "text": " Not so long ago, such a simulation would have taken days and now it runs in real time."}, {"start": 309.28000000000003, "end": 314.0, "text": " They can also create a digital version of real fulfillment centers."}, {"start": 314.0, "end": 320.08, "text": " This virtual world can be simulated and optimized before the real one is changed."}, {"start": 320.08, "end": 324.0, "text": " Does this mean that, yes, that is exactly right."}, {"start": 324.0, "end": 331.12, "text": " Now, companies can find the optimal layout for their plans before making any physical investments."}, {"start": 331.12, "end": 335.28, "text": " Kind of like playing the game factorial, but in real life."}, {"start": 335.28, "end": 337.03999999999996, "text": " How cool is that?"}, {"start": 337.03999999999996, "end": 343.91999999999996, "text": " Their vision system can also look at conveyor belts and just their speed based on congestion."}, {"start": 343.92, "end": 352.56, "text": " And once again, you see that everything is connected, the self-driving engine can also be put not only into a car,"}, {"start": 352.56, "end": 358.40000000000003, "text": " but into a little robot and thus it can now navigate those warehouses."}, {"start": 358.40000000000003, "end": 365.52000000000004, "text": " So, whenever you see a new research paper and it does not immediately show how it could be used,"}, {"start": 365.52000000000004, "end": 368.8, "text": " please do not forget, everything is connected."}, {"start": 368.8, "end": 377.2, "text": " They also have their Isaac gym system in which these robots can train safely before they are deployed into the real world."}, {"start": 377.2, "end": 382.0, "text": " And these simulations can get insanely detailed and accurate."}, {"start": 382.0, "end": 388.72, "text": " For instance, here the physics and connections of 5400 parts are simulated."}, {"start": 388.72, "end": 394.40000000000003, "text": " And as a result, this virtual robot works exactly as the real one."}, {"start": 394.4, "end": 399.28, "text": " This application would have been unfathomable just a few years ago."}, {"start": 399.28, "end": 403.03999999999996, "text": " And yes, this has huge ramifications."}, {"start": 403.03999999999996, "end": 410.23999999999995, "text": " For instance, in the future, whenever we have this robot called Animal, pass a test in a simulation,"}, {"start": 410.23999999999995, "end": 414.23999999999995, "text": " it is very likely that it will pass in the real world too."}, {"start": 414.23999999999995, "end": 418.23999999999995, "text": " A simulation that is nearly the same as reality."}, {"start": 418.23999999999995, "end": 419.76, "text": " Just think about that."}, {"start": 419.76, "end": 421.76, "text": " What a time to be alive."}, {"start": 421.76, "end": 427.12, "text": " And once again, applications like this require light transport simulations,"}, {"start": 427.12, "end": 432.0, "text": " cloud streaming, and collaboration with an AI assistant."}, {"start": 432.0, "end": 437.84, "text": " And now you can even stream these from the cloud if you don't have a B-fig graphics card at home."}, {"start": 437.84, "end": 441.2, "text": " And the results are absolutely amazing."}, {"start": 441.2, "end": 444.96, "text": " Here you see what this place will look like at noon."}, {"start": 444.96, "end": 447.84, "text": " Now, how about some trees?"}, {"start": 447.84, "end": 
448.8, "text": " Nice."}, {"start": 448.8, "end": 451.59999999999997, "text": " And maybe more variation."}, {"start": 451.6, "end": 455.52000000000004, "text": " And we don't even need to be experts in 3D modeling,"}, {"start": 455.52000000000004, "end": 458.96000000000004, "text": " who just say what we wish to see to the virtual assistant."}, {"start": 458.96000000000004, "end": 460.56, "text": " And there we go."}, {"start": 460.56, "end": 463.92, "text": " We can also ask what does this look like at night?"}, {"start": 463.92, "end": 466.56, "text": " And it looks spectacular."}, {"start": 466.56, "end": 469.12, "text": " All that is truly amazing."}, {"start": 469.12, "end": 472.08000000000004, "text": " My eyes were popping out like these machines"}, {"start": 472.08000000000004, "end": 474.88, "text": " when seeing these results for the first time."}, {"start": 474.88, "end": 477.44, "text": " And with all these remarkable results,"}, {"start": 477.44, "end": 479.76000000000005, "text": " we really just scratched the surface."}, {"start": 479.76, "end": 481.59999999999997, "text": " There is so much going on."}, {"start": 481.59999999999997, "end": 486.4, "text": " It is almost impossible to keep track of all these amazing projects."}, {"start": 486.4, "end": 489.03999999999996, "text": " Here are some more for your enjoyment."}, {"start": 489.03999999999996, "end": 493.12, "text": " So the papers that you see here in this series are real."}, {"start": 493.12, "end": 494.88, "text": " As real as it gets."}, {"start": 494.88, "end": 500.56, "text": " We can get from a research paper to a real product in just a couple of years."}, {"start": 500.56, "end": 504.71999999999997, "text": " And everything that you saw here today is already in production"}, {"start": 504.71999999999997, "end": 508.71999999999997, "text": " or will be in production in the near future."}, {"start": 508.72, "end": 510.88000000000005, "text": " By the way, if everything goes well,"}, {"start": 510.88000000000005, "end": 515.0400000000001, "text": " I will hold my own talk in this year's GTC as well"}, {"start": 515.0400000000001, "end": 518.0, "text": " and publish a video of it on this channel."}, {"start": 518.0, "end": 521.6800000000001, "text": " Make sure to subscribe and hit the bell icon to not miss it."}, {"start": 521.6800000000001, "end": 523.44, "text": " So what do you think?"}, {"start": 523.44, "end": 525.2, "text": " Does this get your mind going?"}, {"start": 525.2, "end": 527.0400000000001, "text": " What would you use this for?"}, {"start": 527.0400000000001, "end": 528.88, "text": " Let me know in the comments below."}, {"start": 528.88, "end": 533.12, "text": " If you're looking for inexpensive cloud GPUs for AI,"}, {"start": 533.12, "end": 536.96, "text": " Lambda now offers the best prices in the world"}, {"start": 536.96, "end": 539.36, "text": " for GPU cloud compute."}, {"start": 539.36, "end": 542.1600000000001, "text": " No commitments or negotiation required."}, {"start": 542.1600000000001, "end": 545.2, "text": " Just sign up and launch an instance."}, {"start": 545.2, "end": 549.6, "text": " And hold on to your papers because with Lambda GPU cloud,"}, {"start": 549.6, "end": 553.2, "text": " you can get on-demand A100 instances"}, {"start": 553.2, "end": 559.6800000000001, "text": " for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 559.6800000000001, "end": 562.48, "text": " That's 73% savings."}, {"start": 562.48, "end": 565.9200000000001, "text": " Did I mention 
they also offer persistent storage?"}, {"start": 565.92, "end": 571.8399999999999, "text": " So join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 571.8399999999999, "end": 576.4, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 576.4, "end": 580.24, "text": " Make sure to go to LambdaLabs.com slash papers"}, {"start": 580.24, "end": 584.4, "text": " to sign up for one of their amazing GPU instances today."}, {"start": 584.4, "end": 586.64, "text": " Thanks for watching and for your generous support."}, {"start": 586.64, "end": 596.64, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fVrcBY0lOWw
Finally, Robotic Telekinesis is Here! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/robotic-telekinesis 📝 The paper "Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube" is available here: https://robotic-telekinesis.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, we are going to perform something that seems like a true miracle. Oh yes, you are seeing it correctly. This is robotic telekinesis, moving objects from afar. So, what is going on here? Well, we take a human operator who performs these gestures, which are then transferred to a robot arm, and then the magic happens. This is unbelievable. So now, hold on to your papers and let's see how well it can pull off all this. Level 1. Oh my, it comes out guns blazing. Look at that. It is not a brute, not at all. With delicate movements, it can pick up these plush toys or even rotate a box. That is a fantastic start. But it can do even better. Level 2. Let's try to pick up those scissors. That is going to require some dexterous hand movements from the operator and... Wow! Can you believe that? By the time I am seeing this, it is already done. And it can also stack these cups, which is a difficult maneuver, as its own fingers might get in the way. This was a little more of a close call, but it still managed. Bravo! So, with all these amazing tasks, what could possibly be level 3? Well, check this out. Yes, we are going to attempt to open this drawer. That is quite a challenge. Now note that the drawer is slightly ajar to make sure the task is not too hard for its fingers, and let's see. Oh yeah! Good job, little robot. But what good is opening a drawer if we aren't doing anything with it? So here comes plus 1, the final boss level. Open the drawer and pick up the cup. There is no way that this is possible through telekinesis. And oh my goodness, I love it. And note that this human has done extremely well with the hand motions, but is this person a robot whisperer, or can anyone pull this off? That is what I would like to know. And wow! This works with other operators too, which is good news because this means that it is relatively easy and intuitive to use. So much so, and get this, that these people are completely untrained operators. So cool! So now is the part where we expect the bad news to come. Where is the catch? Maybe we need some sinfully expensive camera gear to look at our hand to make this happen, right? Well, if you have been holding onto your papers so far, now squeeze those papers, because we don't need to buy anything crazy at all. What we need is just one uncalibrated color camera. Today this is available in almost every single household. How cool is that? So if it can do that, all we need is a bunch of training footage, right? But wait a second. Are you thinking what I am thinking? If we need just one uncalibrated color camera, can it be that... yes? That is exactly right. We don't need to create any training data at all. We can just use YouTube videos that already exist out there in the wild. In a previous paper, scientists at DeepMind harnessed the power of YouTube by having their AI watch humans play games, and then they would ask the AI to solve hard exploration games, and it just ripped through these levels in Montezuma's Revenge and other games too. And here comes the best part: what was even more surprising there is that it didn't just perform an imitation of the teacher. No, no, it even outperformed its human teacher. Wow, I wonder if we could somehow do a variant of that in a future paper. Imagine what good we could do with that. Virtual surgeries: a surgeon could perform a life-saving operation on anyone from anywhere else in the world. Wow, what a time to be alive. 
Now for that, the success rate needs to be much closer to 100% here, but still, this is incredible progress in AI research. And of course, you are an experienced fellow scholar, so don't forget to apply the first law of papers here, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. And two more papers down the line, I bet this will not only be significantly more accurate, but I would dare to say that even full-body retargeting will be possible. Yes, we could move around and have a real robot replicate our movements. Just let the computer graphics people in with their motion capture knowledge, and this might really become a real thing soon. So does this get your mind going? What would you use this for? Let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights and Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time.
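The single uncalibrated color camera pipeline described in this transcript can be sketched with off-the-shelf tools. To be clear, this is not the paper's method, which learns its human-to-robot retargeting from YouTube videos; it is only a minimal illustration of the pipeline shape, assuming the MediaPipe hand tracker and a hypothetical send_to_robot_hand stand-in for a real robot interface:

# Sketch: one ordinary RGB camera -> human hand keypoints -> robot hand targets.
# Not the paper's method; only illustrates the single-camera pipeline shape.
import cv2              # pip install opencv-python
import mediapipe as mp  # pip install mediapipe

def send_to_robot_hand(fingertip_xyz):
    # Hypothetical placeholder: a real system would retarget these keypoints
    # to robot joint angles and stream them to the hand controller.
    print(fingertip_xyz)

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)  # any uncalibrated consumer webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # Indices 4, 8, 12, 16, 20 are the five fingertips in MediaPipe's layout.
        send_to_robot_hand([(lm[i].x, lm[i].y, lm[i].z) for i in (4, 8, 12, 16, 20)])
cap.release()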
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.76, "end": 10.0, "text": " Today, we are going to perform something that seems like a true miracle."}, {"start": 10.0, "end": 13.0, "text": " Oh yes, you are seeing it correctly."}, {"start": 13.0, "end": 18.6, "text": " This is robotic telekinesis, moving objects from afar."}, {"start": 18.6, "end": 21.1, "text": " So, what is going on here?"}, {"start": 21.1, "end": 25.8, "text": " Well, we take a human operator who performs these gestures,"}, {"start": 25.8, "end": 31.8, "text": " which are then transferred to a robot arm and then the magic happens."}, {"start": 31.8, "end": 34.2, "text": " This is unbelievable."}, {"start": 34.2, "end": 40.3, "text": " So now, hold on to your papers and let's see how well it can pull off all this."}, {"start": 40.3, "end": 41.8, "text": " Level 1."}, {"start": 41.8, "end": 45.2, "text": " Oh my, it comes out, guns blazing."}, {"start": 45.2, "end": 46.8, "text": " Look at that."}, {"start": 46.8, "end": 49.400000000000006, "text": " It is not a brute, not at all."}, {"start": 49.4, "end": 56.4, "text": " With delicate movements, it can pick up these plush toys or even rotate a box."}, {"start": 56.4, "end": 58.8, "text": " That is a fantastic start."}, {"start": 58.8, "end": 61.8, "text": " But it can do even better."}, {"start": 61.8, "end": 63.0, "text": " Level 2."}, {"start": 63.0, "end": 65.8, "text": " Let's try to pick up those scissors."}, {"start": 65.8, "end": 71.0, "text": " That is going to require some dexterous hand movements from the operator and..."}, {"start": 71.0, "end": 72.2, "text": " Wow!"}, {"start": 72.2, "end": 78.1, "text": " Can you believe that, by the time I am seeing this, it is already done."}, {"start": 78.1, "end": 84.19999999999999, "text": " And it can also stack these cups, which is a difficult matter as its own finger might"}, {"start": 84.19999999999999, "end": 85.89999999999999, "text": " get in the way."}, {"start": 85.89999999999999, "end": 90.6, "text": " This was a little more of a close call, but it still managed."}, {"start": 90.6, "end": 91.6, "text": " Bravo!"}, {"start": 91.6, "end": 97.6, "text": " So, with all these amazing tasks, what could possibly be level 3?"}, {"start": 97.6, "end": 99.6, "text": " Well, check this out."}, {"start": 99.6, "end": 103.6, "text": " Yes, we are going to attempt to open this drawer."}, {"start": 103.6, "end": 105.6, "text": " That is quite a challenge."}, {"start": 105.6, "end": 111.5, "text": " Now note that it is slightly a drawer to make sure the task is not too hard for its fingers"}, {"start": 111.5, "end": 113.6, "text": " and let's see."}, {"start": 113.6, "end": 115.39999999999999, "text": " Oh yeah!"}, {"start": 115.39999999999999, "end": 117.39999999999999, "text": " Good job, little robot."}, {"start": 117.39999999999999, "end": 122.19999999999999, "text": " But what good is opening a drawer if we aren't doing anything with it?"}, {"start": 122.19999999999999, "end": 126.8, "text": " So here comes plus 1, the final boss level."}, {"start": 126.8, "end": 130.2, "text": " Open the drawer and pick up the cup."}, {"start": 130.2, "end": 134.9, "text": " There is no way that this is possible through telekinesis."}, {"start": 134.9, "end": 139.1, "text": " And oh my goodness, I love it."}, {"start": 139.1, "end": 145.5, "text": " And note that this human has done extremely well with the hand motions, but is this person"}, {"start": 145.5, 
"end": 150.0, "text": " a robot whisperer or can anyone pull this off?"}, {"start": 150.0, "end": 152.1, "text": " That is what I would like to know."}, {"start": 152.1, "end": 153.70000000000002, "text": " And wow!"}, {"start": 153.70000000000002, "end": 159.22, "text": " This works with other operators too, which is good news because this means that it is"}, {"start": 159.22, "end": 162.9, "text": " relatively easy and intuitive to use."}, {"start": 162.9, "end": 168.9, "text": " So much so and get this that these people are completely untrained operators."}, {"start": 168.9, "end": 170.46, "text": " So cool!"}, {"start": 170.46, "end": 174.6, "text": " So now is the part where we expect the bad news to come."}, {"start": 174.6, "end": 176.1, "text": " Where is the catch?"}, {"start": 176.1, "end": 181.70000000000002, "text": " Maybe we need some sinfully expensive camera gear to look at our hand to make this happen,"}, {"start": 181.70000000000002, "end": 182.70000000000002, "text": " right?"}, {"start": 182.70000000000002, "end": 188.5, "text": " Well, if you have been holding onto your paper so far, now squeeze those papers because"}, {"start": 188.5, "end": 192.26, "text": " we don't need to buy anything crazy at all."}, {"start": 192.26, "end": 196.26, "text": " What we need is just one uncalibrated color camera."}, {"start": 196.26, "end": 200.85999999999999, "text": " Today this is available in almost every single household."}, {"start": 200.85999999999999, "end": 202.7, "text": " How cool is that?"}, {"start": 202.7, "end": 208.45999999999998, "text": " So if it can do that, all we need is a bunch of training footage, right?"}, {"start": 208.45999999999998, "end": 210.54, "text": " But wait a second."}, {"start": 210.54, "end": 213.26, "text": " Are you thinking what I am thinking?"}, {"start": 213.26, "end": 218.94, "text": " If we need just one uncalibrated color camera, can it be that yes?"}, {"start": 218.94, "end": 220.73999999999998, "text": " That is exactly right."}, {"start": 220.74, "end": 224.10000000000002, "text": " We don't need to create any training data at all."}, {"start": 224.10000000000002, "end": 229.5, "text": " We can just use YouTube videos that already exist out there in the wild."}, {"start": 229.5, "end": 235.82000000000002, "text": " In a previous paper, scientists at DeepMind horned the power of YouTube by having their AI"}, {"start": 235.82000000000002, "end": 242.66000000000003, "text": " watch humans play games and then they would ask the AI to solve hard exploration games"}, {"start": 242.66000000000003, "end": 248.94, "text": " and it just ripped through these levels in Montazuma's revenge and other games too."}, {"start": 248.94, "end": 254.02, "text": " And here comes the best part, what was even more surprising there is that it didn't just"}, {"start": 254.02, "end": 257.02, "text": " perform an imitation of the teacher."}, {"start": 257.02, "end": 261.46, "text": " No, no, it even outperformed its human teacher."}, {"start": 261.46, "end": 268.14, "text": " Wow, I wonder if we could somehow do a variant of that in a future paper."}, {"start": 268.14, "end": 270.58, "text": " Imagine what good we could do with that."}, {"start": 270.58, "end": 276.9, "text": " Virtual surgeries, a surgeon could perform a life-saving operation on anyone from anywhere"}, {"start": 276.9, "end": 278.62, "text": " else in the world."}, {"start": 278.62, "end": 281.3, "text": " Wow, what a time to be alive."}, {"start": 281.3, "end": 288.14, "text": " Now for that, 
the success rate needs to be much closer to 100% here, but still, this"}, {"start": 288.14, "end": 291.74, "text": " is incredible progress in AI research."}, {"start": 291.74, "end": 297.42, "text": " And of course, you are an experienced fellow scholar, so you don't forget to apply the"}, {"start": 297.42, "end": 302.02, "text": " first law of papers here which says that research is a process."}, {"start": 302.02, "end": 307.7, "text": " Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 307.7, "end": 313.86, "text": " And two more papers down the line, I bet this will not only be significantly more accurate,"}, {"start": 313.86, "end": 319.42, "text": " but I would dare to say that even full body retargeting will be possible."}, {"start": 319.42, "end": 325.46, "text": " Yes, we could move around and have a real robot replicate our movements."}, {"start": 325.46, "end": 329.41999999999996, "text": " Just let the computer graphics people in with their motion capture knowledge and this"}, {"start": 329.41999999999996, "end": 332.46, "text": " might really become a real thing soon."}, {"start": 332.46, "end": 334.86, "text": " So does this get your mind going?"}, {"start": 334.86, "end": 336.7, "text": " What would you use this for?"}, {"start": 336.7, "end": 338.53999999999996, "text": " Let me know in the comments below."}, {"start": 338.53999999999996, "end": 343.21999999999997, "text": " What you see here is a report of this exact paper we have talked about which was made by"}, {"start": 343.21999999999997, "end": 344.74, "text": " weights and biases."}, {"start": 344.74, "end": 346.94, "text": " I put a link to it in the description."}, {"start": 346.94, "end": 347.94, "text": " Make sure to have a look."}, {"start": 347.94, "end": 351.5, "text": " I think it helps you understand this paper better."}, {"start": 351.5, "end": 356.82, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 356.82, "end": 361.94, "text": " Using their system, you can create beautiful reports like this one to explain your findings"}, {"start": 361.94, "end": 363.94, "text": " to your colleagues better."}, {"start": 363.94, "end": 370.86, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more."}, {"start": 370.86, "end": 376.94, "text": " And the best part is that weights and biases is free for all individuals, academics and"}, {"start": 376.94, "end": 378.7, "text": " open source projects."}, {"start": 378.7, "end": 384.9, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 384.9, "end": 388.62, "text": " description and you can get a free demo today."}, {"start": 388.62, "end": 399.18, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1kV-rZZw50Q
NVIDIA’s New AI Trained For 10 Years! But How? 🤺
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/ASE 📝 The paper "ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters" is available here: https://nv-tlabs.github.io/ASE/ 📝 Our material synthesis paper with the latent space is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Chapters: 0:00 10 Years of training? 0:42 After 1 week 1:23 After 4 months 1:31 After 2 years 1:55 After 10 years! 2:25 How did they train for 10 years? 3:01 1. Latent spaces 3:52 2. Robust recovery 4:35 3. The controls are 👌 5:01 4. Adversaries 5:57 A great life lesson 6:15 The Third Law of Papers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, you will see an absolute banger paper. This is about how amazingly Nvidia's virtual characters can move around after they have trained for 10 years. 10 years? We don't have 10 years for a project! Well, luckily, we don't have to wait for 10 years. Why is that? I will tell you exactly why in a moment. But believe me, these folks are not natural-born warriors. They are AI agents that have to train for a long, long time to become so good. So, how does that work? Well, first, our candidates are fed a bunch of basic motions, and then are dropped into Nvidia's Isaac, which is a virtual gym where they can hone their skills. But, unfortunately, they have none. After a week of training, I expected that they would showcase some amazingly athletic warrior moves, but instead we got this. Oh my goodness. And well, let's be optimistic and say that they are practicing judo, where the first lesson is learning how to fall. Yes, let's say that. Then, after four months, oh, we can witness some improvement. Well, now they are not falling, and they can do some basic movement. But they still look like constipated warriors. After two years, we are starting to see something that resembles true fight moves. They are not there yet, but they have improved a great deal. Except this chap. This chap goes like: Sir, I've been training for two years. I've had enough, and now I shall leave. In style. I wonder what these will look like with eight more years of training. Well, hold onto your papers and let's see together. Oh my, that is absolutely amazing. Now that's what I call a bunch of real fighters. See, time is the answer. It even made our stylish chap take his training seriously. So, which one is your favorite from here? Did you find some interesting movements? Let me know in the comments below. Now, I promised that we would talk about the ten-year thing. So, did these scientists at Nvidia start this paper in 2012? Well, not quite. This is ten years of training, but in a virtual world. However, a real-world computer simulates this virtual world, and it can do it much quicker than that. How much quicker? Well, a powerful machine can simulate these ten years not in ten years, but in ten days. Oh, yes. Now that sounds much better. And we are not done yet, not even close. When reading this paper, I was so happy to find out that this new technique also has four more amazing features. One, it works with latent spaces. What is that? A latent space is a meta place where similar kinds of data are laid out to be close to each other. In our earlier paper, we used such a space to create beautiful virtual materials for virtual worlds. Nvidia here uses a latent space to switch between the motion types that the character now knows, and not only that, but the AI also learned how to weave these motions together, even if they were not combined together in the training data. That is incredible. Two, this is my favorite. It has to be. They not only learn to fall, but in those ten years, they also had plenty of opportunity to learn to get up. Do you know what this means? Of course, this means the favorite pastime of the computer graphics researcher, and that is throwing boxes at virtual characters. We like to say that we are testing whether the character can recover from random perturbations. That sounds a little more scientific. And these AI agents are passing with flying colors, or flying boxes, if you will. Wow! Three, the controls are also excellent. 
Look, this really has some amazing potential to be used in virtual worlds, because we can even have the character face one way and move in a different direction at the same time. More detailed poses can also be specified. And what's more, with this, we can even enter a virtual environment and strike down these evil pillars with precision. Loving it. Four, these motions are synthesized adversarially. This means that we have a generator neural network creating these new kinds of motions. But we connect it to another neural network, called the discriminator, that watches it and ensures that the generated motions are similar to the ones in the data set and seem real too. And as they battle each other, they also improve together, and in the end, we take only the motion types that are good enough to fool the discriminator. Hopefully, these are good enough to fool the human eye too. And as you see, the results speak for themselves. If we weren't doing it this way, here is what we would get if we trained these agents from scratch. And yes, while we are talking about training, this did not start out well at all. Imagine if scientists at Nvidia had quit after just one week of training, which is about 30 minutes in real time. These results are not too promising, are they? But they still kept going. And the result was this. That is excellent life advice right there, and also an excellent opportunity for us to invoke the third law of papers. Not the first, the third one. This says that a bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see here is always just 1% of the work that was done. And all this is done by Nvidia, so I am sure that we will see this deployed in real-world projects, where these amazing agents will get democratized by putting them into the hands of all of us. What a time to be alive! So, does this get your mind going? What would you use this for? Let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look; I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights and Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
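The adversarial setup described in this transcript, a generator of motions battling a discriminator that tries to tell them from the reference data, boils down to a loop like the following. This is a generic GAN-style sketch, not NVIDIA's ASE training code; the motion and latent dimensions are made-up stand-ins, and random tensors take the place of the mocap dataset:

# Minimal sketch of the generator-vs-discriminator idea (not NVIDIA's code).
import torch
import torch.nn as nn

MOTION_DIM, LATENT_DIM = 64, 32  # assumed sizes of a motion snippet and a skill code

gen = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, MOTION_DIM))
disc = nn.Sequential(nn.Linear(MOTION_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
real_clips = torch.randn(256, MOTION_DIM)  # stand-in for the reference mocap clips

for step in range(1000):
    # Discriminator: score reference clips as real (1), generated clips as fake (0).
    fake = gen(torch.randn(32, LATENT_DIM)).detach()
    real = real_clips[torch.randint(0, 256, (32,))]
    d_loss = bce(disc(real), torch.ones(32, 1)) + bce(disc(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce motions that the discriminator scores as real, i.e. fool it.
    g_loss = bce(disc(gen(torch.randn(32, LATENT_DIM))), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

The latent codes feeding the generator are also where the transcript's latent-space point shows up: nearby codes map to similar motions, so interpolating between two codes is one way to weave two learned skills together.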
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fahir."}, {"start": 5.0, "end": 9.0, "text": " Today, you will see an absolute banger paper."}, {"start": 9.0, "end": 18.0, "text": " This is about how amazingly, and videos virtual characters can move around after they have trained for 10 years."}, {"start": 18.0, "end": 25.0, "text": " 10 years? We don't have 10 years for a project? Well, luckily we don't have to wait for 10 years."}, {"start": 25.0, "end": 34.0, "text": " Why is that? I will tell you exactly why in a moment. But believe me, these folks are not natural born warriors."}, {"start": 34.0, "end": 42.0, "text": " They are AI agents that have to train for a long, long time to become so good. So, how does that work?"}, {"start": 42.0, "end": 50.0, "text": " Well, first, our candidates are fed a bunch of basic motions, and then are dropped into Nvidia's Isaac,"}, {"start": 50.0, "end": 57.0, "text": " which is a virtual gym where they can hone their skills. But, unfortunately, they have none."}, {"start": 57.0, "end": 64.0, "text": " After a week of training, I expected that they would showcase some amazingly athletic warrior moves,"}, {"start": 64.0, "end": 69.0, "text": " but instead we got this. Oh my goodness."}, {"start": 69.0, "end": 77.0, "text": " And well, let's be optimistic and say that they are practicing judo, where the first lesson is learning how to fall."}, {"start": 77.0, "end": 85.0, "text": " Yes, let's say that. Then, after two months, oh, we can witness some improvement."}, {"start": 85.0, "end": 94.0, "text": " Well, now they are not falling, and they can do some basic movement. But they still look like constipated warriors."}, {"start": 94.0, "end": 102.0, "text": " After two years, we are starting to see something that resembles true fight moves. These are not there yet,"}, {"start": 102.0, "end": 108.0, "text": " but they have improved a great deal. Except this chap. This chap goes like,"}, {"start": 108.0, "end": 115.0, "text": " Sir, I've been training for two years. I've had enough, and now I shall leave. In style."}, {"start": 115.0, "end": 124.0, "text": " I wonder what these will look like in eight more years of training. Well, hold onto your papers and let's see together."}, {"start": 124.0, "end": 134.0, "text": " Oh my, that is absolutely amazing. Now that's what I call a bunch of real fighters. See, time is the answer."}, {"start": 134.0, "end": 139.0, "text": " It even made our stylish chap take his training seriously."}, {"start": 139.0, "end": 146.0, "text": " So, which one is your favorite from here? Did you find some interesting movements? Let me know in the comments below."}, {"start": 146.0, "end": 157.0, "text": " Now, I promise that we will talk about the ten-year thing. So, these scientists at Nvidia start this paper in 2012. Well, not quite."}, {"start": 157.0, "end": 166.0, "text": " This is ten years of training, but in a virtual world. However, a real-world computer simulates this virtual world,"}, {"start": 166.0, "end": 175.0, "text": " and it can do it much quicker than that. How much quicker? Well, the powerful machine can simulate these ten years,"}, {"start": 175.0, "end": 186.0, "text": " not in ten years, but in ten days. Oh, yes. Now that sounds much better. 
And we are not done yet, not even close."}, {"start": 186.0, "end": 194.0, "text": " When reading this paper, I was so happy to find out that this new technique also has four more amazing features."}, {"start": 194.0, "end": 206.0, "text": " One, it works with latent spaces. What is that? A latent space is a meta place where similar kinds of data are laid out to be close to each other."}, {"start": 206.0, "end": 214.0, "text": " In our earlier paper, we used such a space to create beautiful virtual materials for virtual worlds."}, {"start": 214.0, "end": 232.0, "text": " Nvidia here uses a latent space to switch between the motion types that the character now knows, and not only that, but the AI also learned how to weave these motions together, even if they were not combined together in the training data."}, {"start": 232.0, "end": 240.0, "text": " That is incredible. Two, this is my favorite. It has to be. They not only learn to fall,"}, {"start": 240.0, "end": 253.0, "text": " but in those ten years, they also had plenty of opportunity to learn to get up. Do you know what this means? Of course, this means the favorite pastime of the computer graphics researcher,"}, {"start": 253.0, "end": 266.0, "text": " and there is throwing boxes at virtual characters. We like to say that we are testing whether the character can recover from random perturbations. That sounds a little more scientific."}, {"start": 266.0, "end": 274.0, "text": " And these AI agents are passing with flying colors, or flying boxes, if you will. Wow!"}, {"start": 274.0, "end": 291.0, "text": " Three, also the controls are excellent. Look, this really has some amazing potential to be used in virtual worlds, because we can even have the character face one way, and move into a different direction at the same time."}, {"start": 291.0, "end": 304.0, "text": " More detailed poses can also be specified. And what's more, with this, we can even enter a virtual environment and strike down these evil pillars with precision."}, {"start": 304.0, "end": 315.0, "text": " Loving it. Four, these motions are synthesized adversarially. This means that we have a generator neural network creating these new kinds of motions."}, {"start": 315.0, "end": 328.0, "text": " But we connected to another neural network called the discriminator that watches it and ensures that the generated motions are similar to the ones in the data set and seem real too."}, {"start": 328.0, "end": 339.0, "text": " And as they battle each other, they also improve together, and in the end, we take only the motion types that are good enough to fall the discriminator."}, {"start": 339.0, "end": 347.0, "text": " Hopefully, these are good enough to fall the human eye too. And as you see, the results speak for themselves."}, {"start": 347.0, "end": 359.0, "text": " If we wouldn't be doing it this way, here is what we would get if we trained these agents from scratch. And yes, while we are talking about training, this did not start out well at all."}, {"start": 359.0, "end": 371.0, "text": " Imagine if scientists at Nvidia quit after just one week of training, which is about 30 minutes in real time. These results are not too promising, are they?"}, {"start": 371.0, "end": 385.0, "text": " But they still kept going. And the result was this. That is excellent life advice right there, and also an excellent opportunity for us to invoke the third law of papers."}, {"start": 385.0, "end": 397.0, "text": " Not the first, the third one. 
This says that a bad researcher fails 100% of the time, while a good one only fails 99% of the time."}, {"start": 397.0, "end": 403.0, "text": " Hence, what you see here is always just 1% of the work that was done."}, {"start": 403.0, "end": 416.0, "text": " And all this is done by Nvidia, so I am sure that we will see this deployed in real-world projects where these amazing agents will get democratized by putting it into the hands of all of us."}, {"start": 416.0, "end": 418.0, "text": " What a time to be alive!"}, {"start": 418.0, "end": 424.0, "text": " So, as this gets your mind going, what would you use this for? Let me know in the comments below."}, {"start": 424.0, "end": 436.0, "text": " What you see here is a report of this exact paper we have talked about, which was made by Wades and Biasis. I put a link to it in the description. Make sure to have a look, I think it helps you understand this paper better."}, {"start": 436.0, "end": 449.0, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better."}, {"start": 449.0, "end": 464.0, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Wades and Biasis is free for all individuals, academics, and open source projects."}, {"start": 464.0, "end": 474.0, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 474.0, "end": 481.0, "text": " Our thanks to Wades and Biasis for their long-standing support and for helping us make better videos for you."}, {"start": 481.0, "end": 509.0, "text": " Thanks for watching it for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Q9FGUii_4Ok
OpenAI DALL-E 2 - Top 10 Best Images! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ 🕊️ Follow us for more results on Twitter! https://twitter.com/twominutepapers 🧑‍🎨 Check out Felícia Zsolnai-Fehér's works: https://www.instagram.com/feliciart_86/ 🧑‍🎨 Judit Somogyvári's works: https://www.artstation.com/sheyenne https://www.instagram.com/somogyvari.art/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background: OpenAI DALL-E 2 Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Links: Schrödinger cat: https://twitter.com/twominutepapers/status/1537871684979642369 Simulation: https://twitter.com/twominutepapers/status/1537125769528459268 Self portrait: https://twitter.com/twominutepapers/status/1538238078640340992 Office worker: https://twitter.com/twominutepapers/status/1536753218297929729 Angry tiger: https://twitter.com/twominutepapers/status/1538239586182344705 https://twitter.com/BellaRender/status/1538270897181802500/photo/1 Self portrait: https://twitter.com/twominutepapers/status/1538238078640340992 Cat falling into black hole: https://twitter.com/twominutepapers/status/1537548655246311424 Cat meme: https://twitter.com/OpDarkside/status/1537552199261118466 DND battle map: https://twitter.com/twominutepapers/status/1537098423152922624 Walrus: https://twitter.com/twominutepapers/status/1538258566838210561 Chomsky’s classic: https://twitter.com/twominutepapers/status/1538234269683810304 Bob Ross: https://www.reddit.com/r/dalle2/comments/v4xut7/a_painting_of_bob_ross_painting_a_self_portrait/ Chapters: 0:00 - What is DALL-E 2? 0:56 - Novel images 1:40 - The Legendary Fox Scientist 2:14 - As good as an artist? 2:43 - Amazing new results! 3:08 - 1 3:25 - 2 3:43 - 3 4:10 - 4 4:26 - 5 4:57 - 6 5:27 - 7 5:37 - 8 6:22 - 9 6:35 - 10 7:07 - Plus 1 7:26 - Plus 2 7:70 - DALL-E 2 vs artist 8:13 - Changing the world Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #dalle
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. These amazing images were all made by an AI called DALL-E 2. This AI is endowed with a diffusion-based model, which means that when we ask it something, it starts out from noise, and over time it iteratively refines this image to match our description better. And over time, magically, an absolutely incredible image emerges. And this is a neural network that is given a ton of images to train on, and a piece of text that says what is in this image. That is one image-caption pair. DALL-E 2 is given millions and millions of these pairs. So, what can you do with all this knowledge? Well, the key takeaway here is that it does no or very little copying from its training data, but it truly comes up with novel images. How? Well, after it had seen a bunch of images of koalas, and separately, a bunch of images of motorcycles, it starts to understand the concept of both, and it will be able to combine the two together into a completely new image. And, luckily, due to the generosity of OpenAI, I also have access, and I tried my best to push it to the limits. For instance, it can also generate an image of you fellow scholars marveling at this paper, or in the process of reading an even more explosive paper. So good! I also tried this fox scientist in an earlier episode, and it came out extremely well, with tons of personality. And hold on to your papers, because due to popular request, you are getting some more variants from my experiment with it. And DALL-E 2 did not disappoint. This character can be recreated as a beautiful sculpture with several amazing variants, or in Dragon Ball style, and it is also a true merch monster. But, earlier, we also tested it against the works of real artists. My opinion is that it is not yet as good as a world-class artist. Here are some tests against Judit Somogyvári's work. And it is also not as good as the king of all donuts, Andrew Price. However, DALL-E 2 is able to create 10,000 images every day. That is a different kind of value proposition. So, we ask today, after these amazing results, does it really have more surprises up its sleeve? Oh boy! By the end of this video, you will see how much of a silly question that is. So today, we are going to have a look at 10 recent amazing results, some of which are from prompts that I ran for you on Twitter. Let's see. For instance, this one is from you. One, Schrödinger's cat gripping the latest research papers. Loving the glasses and the scholarly setting. And, wait a minute, don't eat them. Come on! What results are you trying to hide? We will never know. Two, this is a screaming tiger trading cryptocurrencies. And yes, you are seeing correctly, the meme game is strong here. The resemblance is uncanny. So good! Three, I am always curious about what it thinks about itself. Let's ask it to make a self-portrait of itself looking in the mirror. Well, it identifies as a robot. Okay, got it. That makes sense. And it is also a penguin, excuse me, and a robotic frog too, one that seems to be surprised by its own existence. How cool is this? Also, food for thought. Four, it can create battle maps for tabletop role-playing games. And I bet variants of these will also inspire some of you fellow scholars to dream up amazing new board games or even video games. Five, we can't have enough cats, can we? This is one falling into a black hole. I love how expressive the eyes are. The first two seem quite frightened about this journey. First time, right? 
But the third one, this chap seems really confident. And I love how it also inspired you fellow scholars with the memes that are now starting to appear on these DALL-E 2 images. Keep the game going. Six, I now introduce you to the king of nightlife, a DJ-ing walrus with sunglasses. This one here is an excellent testament to the creative capabilities of DALL-E 2. Clearly, its training set did not contain images of walruses in sunglasses, and much less ones that are DJ-ing. It made this thing up on the spot. What a time to be alive! Seven, if you think that this world is a simulation and you wish to see what it looks like from the outside, DALL-E 2 has you covered. Oh yes. Now, eight, let's put it to the real test with Noam Chomsky's classic sentence. Here goes: colorless green ideas sleep furiously. He put this together to show that you can create a sentence that is grammatically correct, but semantically it is complete nonsense. No meaning whatsoever. Well, DALL-E 2 begs to differ. Here are its ideas. This idea seems colorless, with a bit of green paint, and from the bed, it seems that it was asleep and is now furious. This one makes me feel that the AI can create anything, a true creativity machine. So, nine, anything you say? Well, this one is from the amazing DALL-E 2 subreddit, and it showcases Bob Ross painting himself in infinite recursion. This is one of the most impressive prompts I have seen yet. Wow! Ten, I really wanted to make a squirrel with a malfunctioning laptop. I tried to push DALL-E 2 to the limits in creating this with a huge variety of different styles, and to say that I was not disappointed would be an understatement. Look at these beautiful results. My goodness. It can do Disney style, cartoon style, oil paintings. I feel like this can do absolutely anything. And plus one, because I can never resist. This is Kermit the Frog in Fight Club. Oh yes, it seems to me that he went for the method actor route and made the role his own. Better not mess with this guy. And you know what? If you have been holding onto your papers so far, now squeeze that paper for plus two. I hear you asking, Károly, why are you posting the thumbnail of an earlier Two Minute Papers video here? What is this? Is it possible that? Oh yes. Mm-hmm. That is exactly right. Quite a few of our latest thumbnails were made by DALL-E 2. What a time to be alive. My wonderful wife, Felícia Zsolnai-Fehér, does the rest of the design work on these gorgeous thumbnails. I absolutely love them. She is so good. Not only in graphic design, but her pencil drawing was also compared to DALL-E 2, and she was one of the amazing artists who was able to beat it, at least in my opinion. So as you see, DALL-E 2 is transforming the world around us. And fast. Just a couple of years ago, I was wondering whether we could take an AI, just say what we want, and get a high-quality thumbnail for our videos, and I thought, well, maybe in my lifetime. By the time I become an old man, perhaps such a thing would exist. And boom, just a couple more papers down the line, and here we are. I truly cannot believe how good these results are. What a time to be alive. And don't forget, this was what DALL-E 1 was capable of just a bit more than a year ago, and now, one more paper down the line, the results are outstanding. I wonder what DALL-E 3 will be capable of. I bet it is already in the works. So, does this get your mind going? Which work did you like best? Let me know in the comments below. 
If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
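The "starts out from noise and iteratively refines" behavior described at the top of this transcript corresponds to a denoising loop like the one below. This is a generic DDPM-style sampler for illustration only, not DALL-E 2's actual system, which adds a CLIP prior, a text-conditioned decoder and upsamplers; model here is a hypothetical stand-in for a trained noise predictor:

# Toy DDPM-style sampling: start from pure noise, iteratively remove predicted noise.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)  # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def model(x, t):
    # Hypothetical placeholder: a trained network would predict the noise here.
    return torch.zeros_like(x)

x = torch.randn(1, 3, 64, 64)  # step T: pure noise
for t in reversed(range(T)):
    eps = model(x, t)  # predicted noise at step t
    # Standard DDPM mean update: remove a little of the predicted noise.
    x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject a bit of noise
# x is now the (toy) generated image tensor.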
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.88, "end": 10.76, "text": " These amazing images were all made by an AI called Dolly II."}, {"start": 10.76, "end": 16.96, "text": " This AI is endowed with a diffusion-based model, which means that when we ask it something,"}, {"start": 16.96, "end": 25.400000000000002, "text": " it starts out from noise and over time it iteratively refines this image to match our description better."}, {"start": 25.4, "end": 31.759999999999998, "text": " And over time, magically an absolutely incredible image emerges."}, {"start": 31.759999999999998, "end": 36.44, "text": " And this is a neural network that is given a ton of images to train on,"}, {"start": 36.44, "end": 40.64, "text": " and a piece of text that says what is in this image."}, {"start": 40.64, "end": 43.32, "text": " That is one image caption pair."}, {"start": 43.32, "end": 47.879999999999995, "text": " Dolly II is given millions and millions of these pairs."}, {"start": 47.879999999999995, "end": 51.2, "text": " So, what can you do with all this knowledge?"}, {"start": 51.2, "end": 57.96, "text": " Well, the key takeaway here is that it does know or very little copying from this training data,"}, {"start": 57.96, "end": 61.6, "text": " but it truly comes up with novel images."}, {"start": 61.6, "end": 62.6, "text": " How?"}, {"start": 62.6, "end": 68.44, "text": " Well, after it had seen a bunch of images of koalas and separately,"}, {"start": 68.44, "end": 74.44, "text": " a bunch of images of motorcycles, it starts to understand the concept of both"}, {"start": 74.44, "end": 80.48, "text": " and it will be able to combine the two together into a completely new image."}, {"start": 80.48, "end": 89.72, "text": " And, luckily, due to the generosity of OpenAI, I also have access and I tried my best to push it to the limits."}, {"start": 89.72, "end": 96.24000000000001, "text": " For instance, it can also generate an image of you fellow scholars marveling at this paper"}, {"start": 96.24000000000001, "end": 100.88000000000001, "text": " or in the process of reading an even more explosive paper."}, {"start": 100.88000000000001, "end": 102.28, "text": " So good!"}, {"start": 102.28, "end": 105.96000000000001, "text": " I also tried this Fox scientist in an earlier episode,"}, {"start": 105.96, "end": 110.72, "text": " and it came out extremely well with tons of personality."}, {"start": 110.72, "end": 119.32, "text": " And hold on to your papers, because due to popular requests, you are getting some more variants from my experiment with it."}, {"start": 119.32, "end": 122.39999999999999, "text": " And Dolly II did not disappoint."}, {"start": 122.39999999999999, "end": 129.2, "text": " This character can be recreated as a beautiful sculpture with several amazing variants,"}, {"start": 129.2, "end": 134.79999999999998, "text": " or indragonbo style, and it is also a true merch monster."}, {"start": 134.8, "end": 139.60000000000002, "text": " But, earlier, we also tested it against the works of real artists."}, {"start": 139.60000000000002, "end": 145.16000000000003, "text": " My opinion is that it is not yet as good as a world class artist."}, {"start": 145.16000000000003, "end": 149.36, "text": " Here are some tests against Yudit Shomujiwari's work."}, {"start": 149.36, "end": 155.4, "text": " And it is also not as good as the king of all donuts and real price."}, {"start": 155.4, "end": 161.64000000000001, "text": 
" However, Dolly II is able to create 10,000 images every day."}, {"start": 161.64000000000001, "end": 164.76000000000002, "text": " That is a different kind of value proposition."}, {"start": 164.76, "end": 168.88, "text": " So, we asked today, after these amazing results,"}, {"start": 168.88, "end": 172.95999999999998, "text": " does it really have more surprises up the sleeve?"}, {"start": 172.95999999999998, "end": 173.84, "text": " Oh boy!"}, {"start": 173.84, "end": 178.6, "text": " By the end of this video, you will see how much of a silly question that is."}, {"start": 178.6, "end": 183.6, "text": " So today, we are going to have a look at 10 recent amazing results,"}, {"start": 183.6, "end": 187.79999999999998, "text": " some of which are from prompt that I ran for you on Twitter."}, {"start": 187.79999999999998, "end": 188.95999999999998, "text": " Let's see."}, {"start": 188.95999999999998, "end": 191.64, "text": " For instance, this one is from you."}, {"start": 191.64, "end": 196.76, "text": " One, shredding your scat and gripping the latest research papers."}, {"start": 196.76, "end": 199.95999999999998, "text": " Loving the glasses and the scholarly setting."}, {"start": 199.95999999999998, "end": 203.44, "text": " And, wait a minute, don't eat them."}, {"start": 203.44, "end": 204.48, "text": " Come on!"}, {"start": 204.48, "end": 207.11999999999998, "text": " What results are you trying to hide?"}, {"start": 207.11999999999998, "end": 209.04, "text": " We will never know."}, {"start": 209.04, "end": 213.72, "text": " Two, this is a screaming tiger trading cryptocurrencies."}, {"start": 213.72, "end": 218.64, "text": " And yes, you are seeing correctly, the meme game is strong here."}, {"start": 218.64, "end": 221.0, "text": " The resemblance is uncanny."}, {"start": 221.0, "end": 222.0, "text": " So good!"}, {"start": 222.0, "end": 227.6, "text": " Three, I am always curious about what it thinks about itself."}, {"start": 227.6, "end": 233.12, "text": " Let's ask it to make a self portrait of itself looking in the mirror."}, {"start": 233.12, "end": 235.76, "text": " Well, it identifies as a robot."}, {"start": 235.76, "end": 237.56, "text": " Okay, got it."}, {"start": 237.56, "end": 239.08, "text": " That makes sense."}, {"start": 239.08, "end": 247.2, "text": " And it is also a penguin, excuse me, and a robotic frog too that seems to be surprised"}, {"start": 247.2, "end": 249.28, "text": " by its own existence."}, {"start": 249.28, "end": 250.96, "text": " How cool is this?"}, {"start": 250.96, "end": 253.12, "text": " Also, food for thought."}, {"start": 253.12, "end": 258.84000000000003, "text": " Four, it can create battle maps for tabletop role-playing games."}, {"start": 258.84000000000003, "end": 265.64, "text": " And I bet variants of these will also inspire some of you fellow scholars to dream up amazing"}, {"start": 265.64, "end": 269.12, "text": " new board games or even video games."}, {"start": 269.12, "end": 272.16, "text": " Five, we can't have enough cats, can we?"}, {"start": 272.16, "end": 275.52, "text": " This is one falling into a black hole."}, {"start": 275.52, "end": 278.68, "text": " I love how expressive the eyes are."}, {"start": 278.68, "end": 282.8, "text": " The first two seem quite frightened about this journey."}, {"start": 282.8, "end": 284.36, "text": " First time, right?"}, {"start": 284.36, "end": 288.8, "text": " But the third one, this chap seems really confident."}, {"start": 288.8, "end": 295.48, "text": " And I love how it also 
inspired you fellow scholars with the memes that are now starting to appear"}, {"start": 295.48, "end": 297.6, "text": " on these dolly two images."}, {"start": 297.6, "end": 298.92, "text": " Keep the game going."}, {"start": 298.92, "end": 306.56, "text": " Six, I now introduce you to the King of Life, a deadbing wars with sunglasses."}, {"start": 306.56, "end": 311.88, "text": " This one here is an excellent testament to the creative capabilities of dolly two."}, {"start": 311.88, "end": 318.52, "text": " Clearly, its training set did not contain images of viruses in sunglasses and much less"}, {"start": 318.52, "end": 320.4, "text": " ones that are deadbing."}, {"start": 320.4, "end": 323.36, "text": " It made this thing up on the spot."}, {"start": 323.36, "end": 325.16, "text": " What a time to be alive!"}, {"start": 325.16, "end": 331.0, "text": " Seven, if you think that this world is a simulation and you wish to see what it looks like"}, {"start": 331.0, "end": 334.64, "text": " from the outside, dolly two has you covered."}, {"start": 334.64, "end": 335.96, "text": " Oh yes."}, {"start": 335.96, "end": 342.08, "text": " Now, eight, let's put it to the real test with Noam Tromsky's classic sentence."}, {"start": 342.08, "end": 346.84, "text": " Here goes, colorless, green ideas sleep furiously."}, {"start": 346.84, "end": 352.79999999999995, "text": " He put this together to show that you can create a sentence that is grammatically correct,"}, {"start": 352.79999999999995, "end": 356.71999999999997, "text": " but semantically it is complete nonsense."}, {"start": 356.71999999999997, "end": 358.71999999999997, "text": " No meaning whatsoever."}, {"start": 358.71999999999997, "end": 362.08, "text": " Well, dolly two, back to differ."}, {"start": 362.08, "end": 364.08, "text": " Here are its ideas."}, {"start": 364.08, "end": 370.64, "text": " This idea seems colorless with a bit of green paint from the bed and it seems that it was"}, {"start": 370.64, "end": 373.68, "text": " asleep and it is now furious."}, {"start": 373.68, "end": 380.71999999999997, "text": " This one makes me feel that the AI can create anything, a true creativity machine."}, {"start": 380.71999999999997, "end": 383.64, "text": " So nine, anything you say?"}, {"start": 383.64, "end": 390.4, "text": " Well, this one is from the amazing dolly two subreddit and it showcases Bob Ross painting"}, {"start": 390.4, "end": 393.03999999999996, "text": " himself in infinite recursion."}, {"start": 393.04, "end": 397.64000000000004, "text": " This is one of the most impressive prompts I have seen yet."}, {"start": 397.64000000000004, "end": 398.64000000000004, "text": " Wow!"}, {"start": 398.64000000000004, "end": 403.8, "text": " Ten, I really wanted to make a squirrel with a malfunctioning laptop."}, {"start": 403.8, "end": 409.8, "text": " I tried to push dolly two to the limits in creating this with a huge variety of different"}, {"start": 409.8, "end": 415.52000000000004, "text": " styles and to say that I was not disappointed would be an understatement."}, {"start": 415.52000000000004, "end": 417.72, "text": " Look at these beautiful results."}, {"start": 417.72, "end": 419.0, "text": " My goodness."}, {"start": 419.0, "end": 422.40000000000003, "text": " It can do Disney style, cartoon style."}, {"start": 422.4, "end": 423.4, "text": " No paintings."}, {"start": 423.4, "end": 427.35999999999996, "text": " I feel like this can do absolutely anything."}, {"start": 427.35999999999996, "end": 431.08, "text": " And plus 
one, because I can never resist."}, {"start": 431.08, "end": 433.71999999999997, "text": " This is Kermit the Frog in Fight Club."}, {"start": 433.71999999999997, "end": 440.64, "text": " Oh yes, it seems to me that he went for the method actor root and made the role his own."}, {"start": 440.64, "end": 443.2, "text": " A better not mess with this guy."}, {"start": 443.2, "end": 444.44, "text": " And you know what?"}, {"start": 444.44, "end": 450.91999999999996, "text": " If you have been holding onto your paper so far, now squeeze that paper for plus two."}, {"start": 450.92, "end": 456.52000000000004, "text": " I hear you asking, Karoi, why are you posting the thumbnail of an earlier two minute papers"}, {"start": 456.52000000000004, "end": 457.92, "text": " video here?"}, {"start": 457.92, "end": 458.92, "text": " What is this?"}, {"start": 458.92, "end": 460.44, "text": " Is it possible that?"}, {"start": 460.44, "end": 461.44, "text": " Oh yes."}, {"start": 461.44, "end": 462.44, "text": " Mm-hmm."}, {"start": 462.44, "end": 464.96000000000004, "text": " That is exactly right."}, {"start": 464.96000000000004, "end": 468.64, "text": " Quite a few of our last thumbnails were made by dolly two."}, {"start": 468.64, "end": 470.8, "text": " What a time to be alive."}, {"start": 470.8, "end": 476.76, "text": " My wonderful wife Felicia Jean-Eiffahir knows the rest of the design work on these gorgeous"}, {"start": 476.76, "end": 477.76, "text": " thumbnails."}, {"start": 477.76, "end": 480.12, "text": " I absolutely love them."}, {"start": 480.12, "end": 481.72, "text": " She is so good."}, {"start": 481.72, "end": 488.0, "text": " Not only in graphic design, but her pencil drawing was also compared to dolly two and she"}, {"start": 488.0, "end": 493.84000000000003, "text": " was one of the amazing artists who was able to beat it, at least in my opinion."}, {"start": 493.84000000000003, "end": 498.76, "text": " So as you see, dolly two is transforming the world around us."}, {"start": 498.76, "end": 500.32, "text": " And fast."}, {"start": 500.32, "end": 506.88, "text": " Just a couple years ago, I was wondering whether we could take an AI, just say what we want,"}, {"start": 506.88, "end": 513.8, "text": " to get a high quality thumbnail for our videos and I thought, well, maybe in my lifetime."}, {"start": 513.8, "end": 519.32, "text": " By the time I become an old man, perhaps such a thing would exist."}, {"start": 519.32, "end": 524.6, "text": " And boom, just a couple more papers down the line and here we are."}, {"start": 524.6, "end": 529.04, "text": " I truly cannot believe how good these results are."}, {"start": 529.04, "end": 531.08, "text": " What a time to be alive."}, {"start": 531.08, "end": 536.72, "text": " And don't forget, this was what dolly one was capable of, just a bit more than a year"}, {"start": 536.72, "end": 543.0, "text": " ago and now one more paper down the line and the results are outstanding."}, {"start": 543.0, "end": 546.8000000000001, "text": " I wonder what dolly three will be capable of."}, {"start": 546.8000000000001, "end": 549.6800000000001, "text": " I bet it is already in the works."}, {"start": 549.6800000000001, "end": 552.24, "text": " So, does this get your mind going?"}, {"start": 552.24, "end": 554.0, "text": " Which work did you like best?"}, {"start": 554.0, "end": 555.84, "text": " Let me know in the comments below."}, {"start": 555.84, "end": 562.8000000000001, "text": " If you are looking for inexpensive cloud GPUs for AI, Lambda now 
offers the best prices"}, {"start": 562.8000000000001, "end": 566.28, "text": " in the world for GPU cloud compute."}, {"start": 566.28, "end": 569.1999999999999, "text": " No commitments or negotiation required."}, {"start": 569.1999999999999, "end": 575.76, "text": " Just sign up and launch an instance and hold on to your papers because with Lambda GPU"}, {"start": 575.76, "end": 585.16, "text": " cloud you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with"}, {"start": 585.16, "end": 586.16, "text": " AWS."}, {"start": 586.16, "end": 589.4, "text": " That's 73% savings."}, {"start": 589.4, "end": 592.88, "text": " Did I mention they also offer persistent storage?"}, {"start": 592.88, "end": 601.04, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda cloud instances,"}, {"start": 601.04, "end": 603.4399999999999, "text": " workstations or servers."}, {"start": 603.4399999999999, "end": 610.4399999999999, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 610.4399999999999, "end": 611.4399999999999, "text": " today."}, {"start": 611.44, "end": 623.44, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=19gzG-AsBNU
Watch This Dragon Grow Out Of Nothing! 🐲
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Differentiable Signed Distance Function Rendering" is available here: http://rgl.epfl.ch/publications/Vicini2022SDF 📝 Our works on differentiable material synthesis and neural rendering are available here (with code): https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Thank you Gordon Hanzmann-Johnson for catching an issue with a previous version of this video! 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, we are going to learn how to take an object in the real world, make a digital copy of it, and place it into our virtual world. This can be done through something that we call differentiable rendering. What is that? Well, simple: we take a photograph, and have an AI find a photorealistic material model that we can put into our light simulation program that matches it. What this means is that now we can essentially put this real material into a virtual world. This earlier work did very well with materials, but it did not capture the geometry. Later, this follow-up work was looking to improve the geometry side of things. Here, let's try to reproduce a 3D model from a bunch of triangles. Let's see, and look, this is lovely. Seeing these images gradually morph into the right solution is an absolutely beautiful sight. But, of course, in this case, the materials are gone. And if we wish to focus more on materials, and we don't care about the geometry at all, we can use one of our previous papers for that. This one uses a learning algorithm to find out about your artistic vision and recommends really cool materials. Or, with this other work, you can even whip up a fake material in Photoshop and it will magically find a digital photorealistic material that matches it. But, once again, no geometry, materials only. But, wait a minute, am I reading this right? With previous method number one, we get the materials, but no geometry. With previous method number two, no materials, but the geometry is so much more detailed. And with previous method number three, excellent materials, but once again, no geometry. We can't have it all, can we? So, is that it? We can't really take a real object and make a virtual copy of it? Is the dream dead? Well, don't despair quite yet. This new technique might just be what we are looking for. Although I am not so sure; this problem is super challenging. Let's try this chair and see what happens. Here are a bunch of images of it. Now, your turn, little algorithm. And, we start out from a sphere. Well, I will believe this when I see it. Good luck. And, hmm, the shape is starting to change and, wait. Are you seeing what I am seeing? Yes, yes, yes. The material is also slowly starting to change. We are getting not only geometry, but materials too. Now, based on previous works, our expectation is that the result will be extremely coarse. Well, hold on to your papers and let's see how much detail we end up with. Oh my, that is so much better. Loving it. Actually, let's check. This was the previous technique and this is the new one. My goodness, I have to say there is no contest here. The new one is so much more detailed. So good. And, it works on a bunch of other examples too. These are not perfect by any means, but this kind of improvement, just one more paper down the line, that is absolutely incredible. The pace of progress in computer graphics research is nothing short of amazing, especially when we look at all the magic scientists are doing in Wenzel Jakob's lab. And, the whole process took less than an hour. That is a considerable amount of time, but an artist can leave a bunch of these computations to cook overnight and by morning, a bunch of already pretty good quality virtual objects will appear that we can place in our virtual worlds. How cool is that? What a time to be alive. Make sure to check out the whole paper, it has crystal clear mathematics and tons of gorgeous images. 
The link is available in the video description. I believe this will be an excellent tool in democratizing asset creation for games, animated movies and all kinds of virtual worlds. What a time to be alive. So, what would you use this for? Do you have some cool ideas? Let me know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts or social media posts to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
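To make the core idea of differentiable rendering concrete, here is a minimal sketch in Python. This is an illustration only, not the paper's method: the scene is a single soft 2D circle rendered from its signed distance function, the "photograph" is a hypothetical target we invent, and finite differences stand in for proper automatic differentiation. The loop is the same start-from-a-sphere-and-morph process described above: render, compare with the target image, and nudge the shape parameters downhill on the error.

import numpy as np

H = W = 64
ys, xs = np.mgrid[0:H, 0:W] / float(H)  # pixel coordinates in [0, 1)

def render(params):
    # "Render" a soft circle from its signed distance function (SDF):
    # negative inside the shape, positive outside, squashed to a 0..1 mask.
    cx, cy, r = params
    sdf = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) - r
    return 1.0 / (1.0 + np.exp(sdf / 0.01))

def loss(params, target):
    return np.mean((render(params) - target) ** 2)  # L2 image difference

# Hypothetical "photograph": a circle whose parameters we pretend not to know.
target = render(np.array([0.65, 0.40, 0.25]))

params = np.array([0.5, 0.5, 0.1])  # start out from a small centered sphere
for step in range(500):
    grad = np.zeros(3)
    for i in range(3):  # central finite differences stand in for autodiff
        d = np.zeros(3)
        d[i] = 1e-4
        grad[i] = (loss(params + d, target) - loss(params - d, target)) / 2e-4
    params -= 0.1 * grad  # gradient descent: the shape morphs toward the photo
print(params)  # ends up near (0.65, 0.40, 0.25)

A real system like the one in the paper swaps the circle for a full 3D signed distance function, renders it with a physically based light simulator, and differentiates through that renderer analytically, but the optimization loop has this same shape.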
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 9.36, "text": " Today, we are going to learn how to take an object in the real world,"}, {"start": 9.36, "end": 14.8, "text": " make a digital copy of it, and place it into our virtual world."}, {"start": 14.8, "end": 19.84, "text": " This can be done through something that we call a differentiable rendering."}, {"start": 19.84, "end": 21.400000000000002, "text": " What is that?"}, {"start": 21.400000000000002, "end": 28.8, "text": " Well, simple, we take a photograph, and have an AI find a photorealistic material model"}, {"start": 28.8, "end": 33.6, "text": " that we can put into our light simulation program that matches it."}, {"start": 33.6, "end": 40.88, "text": " What this means is that now we can essentially put this real material into a virtual world."}, {"start": 40.88, "end": 47.120000000000005, "text": " This earlier work did very well with materials, but it did not capture the geometry."}, {"start": 47.120000000000005, "end": 52.56, "text": " Later, this follow-up work was looking to improve the geometry side of things."}, {"start": 52.56, "end": 58.32, "text": " Here, let's try to reproduce a 3D model from a bunch of triangles."}, {"start": 58.32, "end": 63.68, "text": " Let's see, and look, this is lovely."}, {"start": 63.68, "end": 70.4, "text": " Seeing these images gradually morph into the right solution is an absolutely beautiful site."}, {"start": 70.4, "end": 74.48, "text": " But, of course, in this case, the materials are gone."}, {"start": 74.48, "end": 80.96000000000001, "text": " And if we wish to focus more on materials, and we don't care about the geometry at all,"}, {"start": 80.96000000000001, "end": 84.08, "text": " we can use one of our previous papers for that."}, {"start": 84.08, "end": 89.6, "text": " This one uses a learning algorithm to find out about your artistic vision"}, {"start": 89.6, "end": 93.2, "text": " and recommends really cool materials."}, {"start": 93.2, "end": 99.28, "text": " Or, with this other work, you can even whip up a fake material in Photoshop"}, {"start": 99.28, "end": 106.0, "text": " and it will magically find a digital photorealistic material that matches it."}, {"start": 106.0, "end": 111.03999999999999, "text": " But, once again, no geometry, materials only."}, {"start": 111.04, "end": 114.56, "text": " But, wait a minute, am I reading this right?"}, {"start": 114.56, "end": 120.32000000000001, "text": " With previous method, number one, we get the materials, but no geometry."}, {"start": 120.32000000000001, "end": 128.0, "text": " Or, previous method, number two, no materials, but the geometry is so much more detailed."}, {"start": 128.0, "end": 135.04000000000002, "text": " Or, previous method, number three, excellent materials, but once again, no geometry."}, {"start": 135.04000000000002, "end": 137.12, "text": " We can't have it all, can we?"}, {"start": 137.12, "end": 143.20000000000002, "text": " So, is that it? 
We can't really take a real object and make a virtual copy of it?"}, {"start": 143.20000000000002, "end": 144.72, "text": " Is the dream dead?"}, {"start": 144.72, "end": 147.04, "text": " Well, don't despair quite yet."}, {"start": 147.04, "end": 150.48000000000002, "text": " This new technique might just be what we are looking for."}, {"start": 150.48000000000002, "end": 155.52, "text": " Although, I am not so sure, this problem is super challenging."}, {"start": 155.52, "end": 158.8, "text": " Let's try this chair and see what happens."}, {"start": 158.8, "end": 161.36, "text": " Here are a bunch of images of it."}, {"start": 161.36, "end": 164.0, "text": " Now, your turn, little algorithm."}, {"start": 164.0, "end": 167.2, "text": " And, we start out from a sphere."}, {"start": 167.2, "end": 169.76, "text": " Well, I will believe this when I see it."}, {"start": 169.76, "end": 170.96, "text": " Good luck."}, {"start": 170.96, "end": 177.44, "text": " And, hmm, the shape is starting to change and, wait."}, {"start": 177.44, "end": 179.92, "text": " Are you seeing what I am seeing?"}, {"start": 179.92, "end": 181.44, "text": " Yes, yes, yes."}, {"start": 181.44, "end": 185.68, "text": " The material is also slowly starting to change."}, {"start": 185.68, "end": 190.32, "text": " We are getting not only geometry, but materials too."}, {"start": 190.32, "end": 196.72, "text": " Now, based on previous works, our expectation is that the result will be extremely coarse."}, {"start": 196.72, "end": 202.16, "text": " Well, hold on to your papers and let's see how much detail we end up with."}, {"start": 203.04, "end": 206.32, "text": " Oh my, that is so much better."}, {"start": 206.32, "end": 207.04, "text": " Loving it."}, {"start": 207.51999999999998, "end": 209.35999999999999, "text": " Actually, let's check."}, {"start": 209.35999999999999, "end": 212.95999999999998, "text": " This was the previous technique and this is the new one."}, {"start": 213.92, "end": 218.0, "text": " My goodness, I have to say there is no contest here."}, {"start": 218.0, "end": 220.16, "text": " The new one is so much more detailed."}, {"start": 220.96, "end": 221.52, "text": " So good."}, {"start": 222.16, "end": 225.2, "text": " And, it works on a bunch of other examples too."}, {"start": 225.84, "end": 230.48, "text": " These are not perfect by any means, but this kind of improvement,"}, {"start": 230.48, "end": 235.76, "text": " just one more paper down the line, that is absolutely incredible."}, {"start": 235.76, "end": 240.48, "text": " The pace of progress in computer graphics research is nothing short of amazing,"}, {"start": 240.48, "end": 246.32, "text": " especially when we look at all the magic scientists are doing in Vencell Jakobs lab."}, {"start": 246.32, "end": 249.76, "text": " And, the whole process took less than an hour."}, {"start": 249.76, "end": 256.32, "text": " That is a considerable amount of time, but an artist can leave a bunch of these computations"}, {"start": 256.32, "end": 263.03999999999996, "text": " to cook overnight and by morning, a bunch of already pretty good quality virtual objects"}, {"start": 263.03999999999996, "end": 266.08, "text": " will appear that we can place in our virtual worlds."}, {"start": 266.88, "end": 268.56, "text": " How cool is that?"}, {"start": 268.56, "end": 270.15999999999997, "text": " What a time to be alive."}, {"start": 270.16, "end": 278.16, "text": " Make sure to check out the whole paper, it has crystal clear mathematics and tons of gorgeous 
images."}, {"start": 278.16, "end": 281.12, "text": " The link is available in the video description."}, {"start": 281.12, "end": 287.28000000000003, "text": " I believe this will be an excellent tool in democratizing asset creation for games,"}, {"start": 287.28000000000003, "end": 291.12, "text": " animation movies and all kinds of virtual worlds."}, {"start": 291.12, "end": 293.04, "text": " What a time to be alive."}, {"start": 293.04, "end": 295.76000000000005, "text": " So, what would you use this for?"}, {"start": 295.76000000000005, "end": 297.52000000000004, "text": " Do you have some cool ideas?"}, {"start": 297.52000000000004, "end": 299.28000000000003, "text": " Let me know in the comments below."}, {"start": 299.28, "end": 302.64, "text": " This episode has been supported by Kohir AI."}, {"start": 302.64, "end": 309.84, "text": " Kohir builds large language models and makes them available through an API so businesses can add"}, {"start": 309.84, "end": 316.47999999999996, "text": " advanced language understanding to their system or app quickly with just one line of code."}, {"start": 316.47999999999996, "end": 321.11999999999995, "text": " You can use your own data, whether it's text from customer service requests,"}, {"start": 321.11999999999995, "end": 327.76, "text": " legal contracts or social media posts to create your own custom models to understand text,"}, {"start": 327.76, "end": 330.24, "text": " or even generated."}, {"start": 330.88, "end": 337.2, "text": " For instance, it can be used to automatically determine whether your messages are about your business hours,"}, {"start": 337.2, "end": 345.12, "text": " returns or shipping, or it can be used to generate a list of possible sentences you can use"}, {"start": 345.12, "end": 346.48, "text": " for your product descriptions."}, {"start": 347.03999999999996, "end": 353.2, "text": " Make sure to go to Kohir.ai slash papers or click the link in the video description"}, {"start": 353.2, "end": 354.71999999999997, "text": " and give it a try today."}, {"start": 355.28, "end": 356.88, "text": " It's super easy to use."}, {"start": 356.88, "end": 360.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MO2K0JXAedM
NVIDIA’s AI Nailed Human Face Synthesis! 👩‍🎓
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (Thank you Soumik!): http://wandb.me/styleGAN-NADA 📝 The paper "StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators" is available here: https://stylegan-nada.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #stylegan
And dear fellow scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. We are back, and today we are going to marvel at the capabilities of NVIDIA's incredible image-generation AI. And we will see together that we can all become Witchers. Previously, they were able to create an incredible AI that can take a piece of text from us and create an appropriate image, or it could also give us a little more artistic control as it could even transform a rough sketch into a beautiful output image. Variant generation was also possible. Or if we didn't feel like drawing, it could also take segmentation maps which specify which region should be what. The output was again a beautiful photorealistic image. So, these were mostly landscapes. And what about human faces? Can we generate human faces too? And I wonder if we could also edit them. If so, how? Well, NVIDIA already has incredible works for generating human faces. Look, this is their StyleGAN 3 AI, and as you see, my goodness. These are incredible high-quality faces and we even have a tiny bit of control over the images. Not a great deal, but a little control is there. It works on art pieces too, by the way. And this new technique finally gives us a little more creative freedom in editing these images in five different ways. How? Well, check this out. Oh yes, one, we start sketching and out comes an image of this quality. Wow, this truly is a testament to the power of AI these days. It doesn't just generate images out of thin air, but here we can really harness the power of this AI. And the best part is that we don't even need to be an expert at drawing. So cool. Two, if we don't feel like drawing, but we would really like to see some well-known people as cubist paintings, this can do that too. Three, if we have an image with snow, and we feel like chocolate, vanilla, or cherry ice cream would be so much better for snowboarding, also not a problem. And here comes my favorite. Four, we can make people into Witchers. I think I might make a good one. Look, this is Obi-Wan Károly, a synthetic image of me made by an AI with an added beard. And look, you don't want to mess with Witcher Károly, do you? So who else would make a great Witcher? Well, here are some of my favorites. For instance, Grimes, Beer Demand, Skrillex, Robin Williams, Obama, and the Rock. All excellent Witchers. And five, it can not only produce these images, but even interpolate between them. What does that mean? This means that not only the final image, but intermediate images can also be generated. Look, we are going from a photo to a sketch to a Mona Lisa style painting. While we are looking at these wonderful results, I would like to send a huge thank you to the authors; they were very patient and took quite a bit of time out of their busy day to delight you fellow scholars with these animations. As a result, some of these animations you can only see here on two-minute papers. That is a huge honor. Thank you so much. So how easy is it to use? Well, so easy that it only takes a minute to produce such excellent results, and it likely means that this will be an excellent tool in democratizing artistic image editing, and giving it into the hands of everyone. Now, make no mistake, this is a research paper, not a product. Yet. But NVIDIA has an excellent record of transferring these works into real products. For instance, they published a similar audio-to-face paper in 2017, and now, just a few years later, it is out there for everyone to use. How cool is that? 
I really hope that this work will also have a similar fate. What a time to be alive! So, does this get your mind going? What would you use this for? Let me know in the comments below. What you see here is a report of this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one, to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
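On the interpolation part: the usual mechanism behind such morphs, sketched below in Python, is to blend the latent vectors that produce the two images and decode every intermediate latent, so each in-between frame is itself a freshly generated image rather than a pixel crossfade. This is a hypothetical stand-in, not NVIDIA's code; the generate function is a placeholder for a pretrained generator such as StyleGAN.

import numpy as np

def generate(z):
    # Placeholder generator: a fixed random linear map plus tanh stands in
    # for a pretrained network's forward pass (e.g. StyleGAN).
    rng = np.random.default_rng(0)  # fixed "weights"
    W = rng.standard_normal((64 * 64, z.size)) / np.sqrt(z.size)
    return np.tanh(W @ z).reshape(64, 64)

rng = np.random.default_rng(1)
z_photo = rng.standard_normal(512)   # latent vector behind image A
z_sketch = rng.standard_normal(512)  # latent vector behind image B

frames = []
for t in np.linspace(0.0, 1.0, num=8):
    z = (1.0 - t) * z_photo + t * z_sketch  # blend in latent space
    frames.append(generate(z))              # decode the in-between image

Because the blending happens before the generator, every intermediate frame stays on the model's learned manifold, which is why the morph looks like a plausible image the whole way through.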
[{"start": 0.0, "end": 4.8, "text": " And dear fellow scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 13.76, "text": " We are back, and today we are going to marvel at the capabilities of Envides' incredible image-generation AI."}, {"start": 13.76, "end": 18.400000000000002, "text": " And we will see together that we can all become witches."}, {"start": 18.400000000000002, "end": 24.8, "text": " Previously, they were able to create an incredible AI that can take a piece of text from us"}, {"start": 24.8, "end": 31.200000000000003, "text": " and create an appropriate image, or it could also give us a little more artistic control"}, {"start": 31.200000000000003, "end": 37.6, "text": " as it could even transform a rough sketch into a beautiful output image."}, {"start": 37.6, "end": 40.56, "text": " A variant generation was also possible."}, {"start": 40.56, "end": 45.760000000000005, "text": " Or if we didn't feel like drawing, it could also take segmentation maps"}, {"start": 45.760000000000005, "end": 49.040000000000006, "text": " which specify which region should be what."}, {"start": 49.040000000000006, "end": 53.44, "text": " The output was again a beautiful photorealistic image."}, {"start": 53.44, "end": 56.4, "text": " So, these were mostly landscapes."}, {"start": 56.4, "end": 59.199999999999996, "text": " And what about human faces?"}, {"start": 59.199999999999996, "end": 61.839999999999996, "text": " Can we generate human faces too?"}, {"start": 61.839999999999996, "end": 65.44, "text": " And I wonder if we could also edit them."}, {"start": 65.44, "end": 67.28, "text": " If so, how?"}, {"start": 67.28, "end": 73.36, "text": " Well, Envidea already has incredible works for generating human faces."}, {"start": 73.36, "end": 79.68, "text": " Look, this is their StarGAN 3 AI, and as you see, my goodness."}, {"start": 79.68, "end": 87.92, "text": " These are incredible high-quality faces and we even have a tiny bit of control over the images."}, {"start": 87.92, "end": 92.0, "text": " Not a great deal, but a little control is there."}, {"start": 92.0, "end": 94.96000000000001, "text": " It works on art pieces too, by the way."}, {"start": 94.96000000000001, "end": 99.92000000000002, "text": " And this new technique finally gives us a little more creative freedom"}, {"start": 99.92000000000002, "end": 103.2, "text": " in editing these images in five different ways."}, {"start": 104.0, "end": 104.80000000000001, "text": " How?"}, {"start": 104.80000000000001, "end": 106.32000000000001, "text": " Well, check this out."}, {"start": 106.32, "end": 113.03999999999999, "text": " Oh yes, one, we start sketching and outcomes an image of this quality."}, {"start": 113.83999999999999, "end": 119.44, "text": " Wow, this truly is a testament to the power of AI these days."}, {"start": 119.44, "end": 122.63999999999999, "text": " It doesn't just generate images out of thin air,"}, {"start": 122.63999999999999, "end": 126.72, "text": " but here we can really harness the power of this AI."}, {"start": 126.72, "end": 131.12, "text": " And the best part is that we don't even need to be an expert at drawing."}, {"start": 131.84, "end": 132.48, "text": " So cool."}, {"start": 132.48, "end": 139.04, "text": " Two, if we don't feel like drawing, but we would really like to see some well-known people"}, {"start": 139.04, "end": 142.07999999999998, "text": " as cubist paintings, this can do that too."}, {"start": 142.79999999999998, "end": 148.79999999999998, "text": 
" Three, if we have an image with snow, and we feel like chocolate, vanilla,"}, {"start": 148.79999999999998, "end": 153.04, "text": " or cherry ice cream would be so much better for snowboarding,"}, {"start": 153.6, "end": 155.12, "text": " also not a problem."}, {"start": 155.76, "end": 157.35999999999999, "text": " And here comes my favorite."}, {"start": 158.07999999999998, "end": 161.12, "text": " Four, we can make people into witches."}, {"start": 161.12, "end": 163.68, "text": " I think I might make a good one."}, {"start": 164.48000000000002, "end": 172.56, "text": " Look, this is Obi-Wan Karoi, a synthetic image of me made by an AI with an added beard."}, {"start": 173.28, "end": 178.24, "text": " And look, you don't want to mess with Witcher Karoi, do you?"}, {"start": 180.0, "end": 182.56, "text": " So who else would make a great Witcher?"}, {"start": 183.12, "end": 185.76, "text": " Well, here are some of my favorites."}, {"start": 185.76, "end": 188.4, "text": " For instance, Grimes, Beer Demand,"}, {"start": 188.4, "end": 190.4, "text": " Skrillex,"}, {"start": 192.8, "end": 193.68, "text": " Robin Williams,"}, {"start": 195.20000000000002, "end": 195.84, "text": " Obama,"}, {"start": 197.36, "end": 198.4, "text": " and the Rock."}, {"start": 199.44, "end": 200.96, "text": " All excellent witches."}, {"start": 201.6, "end": 207.76, "text": " And five, it can not only produce these images, but even interpolate between them."}, {"start": 208.4, "end": 209.76, "text": " What does that mean?"}, {"start": 209.76, "end": 216.32, "text": " This means that not only the final image, but intermediate images can also be generated."}, {"start": 216.32, "end": 223.68, "text": " Look, we are going from a photo to a sketch to a Mona Lisa style painting."}, {"start": 224.23999999999998, "end": 229.92, "text": " While we are looking at these wonderful results, I would like to send a huge thank you to the authors,"}, {"start": 229.92, "end": 234.88, "text": " and they were very patient and took quite a bit of time, their busy day,"}, {"start": 234.88, "end": 238.07999999999998, "text": " to delight you fellow scholars with these animations."}, {"start": 238.07999999999998, "end": 243.35999999999999, "text": " As a result, some of these animations you can only see here on two-minute papers."}, {"start": 243.35999999999999, "end": 245.04, "text": " That is a huge honor."}, {"start": 245.04, "end": 246.64, "text": " Thank you so much."}, {"start": 246.64, "end": 248.79999999999998, "text": " So how easy is it to use?"}, {"start": 249.35999999999999, "end": 255.2, "text": " Well, so easy that it only takes a minute to produce such excellent results,"}, {"start": 255.2, "end": 262.15999999999997, "text": " and it likely means that this will be an excellent tool in democratizing artistic image editing,"}, {"start": 262.15999999999997, "end": 265.03999999999996, "text": " and giving it into the hands of everyone."}, {"start": 265.59999999999997, "end": 270.15999999999997, "text": " Now, make no mistake, this is a research paper, not a product."}, {"start": 270.16, "end": 276.8, "text": " Yet, but Nvidia has an excellent record of transferring these works into real products."}, {"start": 276.8, "end": 281.92, "text": " For instance, they published a similar audio to face paper in 2017,"}, {"start": 281.92, "end": 287.52000000000004, "text": " and now, just a few years later, it is out there for everyone to use."}, {"start": 287.52000000000004, "end": 289.28000000000003, "text": " How cool is that?"}, 
{"start": 289.28000000000003, "end": 293.28000000000003, "text": " I really hope that this work will also have a similar fate."}, {"start": 293.28000000000003, "end": 295.28000000000003, "text": " What a time to be alive!"}, {"start": 295.28000000000003, "end": 297.76000000000005, "text": " So, does this get your mind going?"}, {"start": 297.76000000000005, "end": 299.76000000000005, "text": " What would you use this for?"}, {"start": 299.76, "end": 301.92, "text": " Let me know in the comments below."}, {"start": 301.92, "end": 305.92, "text": " What you see here is a report of this exact paper we have talked about,"}, {"start": 305.92, "end": 308.24, "text": " which was made by Wades and Biasis."}, {"start": 308.24, "end": 310.32, "text": " I put a link to it in the description."}, {"start": 310.32, "end": 311.44, "text": " Make sure to have a look."}, {"start": 311.44, "end": 314.8, "text": " I think it helps you understand this paper better."}, {"start": 314.8, "end": 320.15999999999997, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 320.15999999999997, "end": 324.32, "text": " Using their system, you can create beautiful reports like this one,"}, {"start": 324.32, "end": 327.44, "text": " to explain your findings to your colleagues better."}, {"start": 327.44, "end": 331.2, "text": " It is used by many prestigious labs, including OpenAI,"}, {"start": 331.2, "end": 334.24, "text": " Toyota Research, GitHub, and more."}, {"start": 334.24, "end": 339.2, "text": " And the best part is that Wades and Biasis is free for all individuals,"}, {"start": 339.2, "end": 342.16, "text": " academics, and open source projects."}, {"start": 342.16, "end": 346.8, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 346.8, "end": 349.44, "text": " or just click the link in the video description,"}, {"start": 349.44, "end": 352.15999999999997, "text": " and you can get a free demo today."}, {"start": 352.15999999999997, "end": 355.84, "text": " Our thanks to Wades and Biasis for their long-standing support,"}, {"start": 355.84, "end": 358.88, "text": " and for helping us make better videos for you."}, {"start": 358.88, "end": 361.11999999999995, "text": " Thanks for watching and for your generous support,"}, {"start": 361.12, "end": 387.84000000000003, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=a0ubtHxj1UA
Google AI Simulates Evolution On A Computer! 🦖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (Thank you Soumik Rakshit!): https://wandb.me/modern-evolution 📝 The paper "Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts" is available here: https://es-clip.github.io/ 🧑‍🎨 My previous genetic algorithm implementation for the Mona Lisa problem (+ some explanation in the video below): https://users.cg.tuwien.ac.at/zsolnai/gfx/mona_lisa_parallel_genetic_algorithm/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail image: DALL-E 2. Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today is going to be all about simulating evolution on our computers. This can build virtual cars, virtual creatures, and even paint the Mona Lisa. These all sound amazing, but evolution on a computer, how? A few years ago, a really fun online app surfaced that used a genetic algorithm to evolve the morphology of a simple 2D car with the goal of having it roll as far away from a starting point as possible. A genetic algorithm? What is that? Well, it is a super simple technique where we start out from a set of random solutions and likely find out that none of them really work too well. However, we start combining and mutating, or in other words, changing small parts of these solutions until... oh yes, something starts to move. Now, as soon as at least one wheel is placed correctly, the algorithm will recognize that this one rolled so much further and keep the genes of this solution in the pool to breed new, similar solutions from it. A similar concept can also be used to design the optimal morphology of a virtual creature to make it skip forward faster. And look, that is so cool! Over time, the technique learns to more efficiently navigate a flat terrain by redesigning its legs, which are now reminiscent of small springs, and uses them to skip its way forward. And here comes something even cooler: if we change the terrain, the design of an effective agent also changes accordingly. And the super interesting part here is that it came up with an asymmetric design that is able to climb stairs and travel uphill efficiently. Loving it! And in this new paper, scientists at Google Brain tried to turbocharge an evolutionary algorithm to be able to take a bunch of transparent triangles and get it to reorganize them so they will paint the Mona Lisa for us. Or any other image, for that matter. The technique they are using is called Evolution Strategies and they say that it is much faster at this than previous techniques. Well, I will believe it when I see it. So now, hold onto your papers and let's see together. Here is a basic evolutionary algorithm after 10,000 steps. Well, it is getting there, but it's not that great. And let's see how well their new method does with the same amount of steps. Wow! My goodness! That is so much better! In fact, their 10,000 steps is close to equivalent to half a million steps with a previous algorithm. And we are not done yet, not even close. The paper has two more amazing surprises. Surprise number one. We can even combine it with OpenAI's technique called CLIP. This learns about pairs of images and their captions describing these images, which is an important part of DALL-E 2, their amazing image generator AI. That one could take completely outlandish concepts and create beautiful photorealistic images out of them that are often as good as we would expect from a good human artist. Scholars holding onto their papers, a cyber frog, you name it. So, get this. Similarly to that, we can even write a piece of text and the evolutionary triangle builder will try to create an image that fits this text description. With that, we can request a triangle version of a self-portrait, a human, and look at how similar they are. Food for thought. But it can also try to draw Walt Disney World, and that is remarkable. Look at how beautifully it boils it down to its essence with as few as 200 or even just 25 triangles. Loving it. Also, drawing the Google headquarters, no problem at all. And here comes surprise number two. 
The authors claim that it is even faster than a differentiable renderer. These are really powerful optimization techniques that can even grow this beautiful statue out of nothing and do all that in just a few steps. So, claiming that it is even faster than that, well, that is very ambitious. Let's have a look. And, oh yes, as expected, the differentiable technique creates a beautiful image very quickly. Now try to beat that. Wow, the new method converges to the final image even quicker. That speed is simply stunning. Interestingly, they also have different styles. The differentiable renderer introduces textures that are not present in the original image. Well, the new technique uses large triangles to create a smooth approximation of the background and the hair and uses the smaller ones to get a better approximation of the intricate details of the face. Loving it. Now, let's ramp up the challenge and test these a little more and add some prompts. And, oh yes, this is where the differentiable technique flounders on the prompts. Look, it almost seems to try to do everything all at once. While the new technique starts out from a fair approximation and converges to a great solution super quickly. I hope that this new take on evolution strategies will be plugged into applications where differentiable techniques do well and perhaps do even better. That would be absolutely amazing because these are a treasure trove of science fiction-like applications. For instance, we would be able to almost instantly solve this billiard game with just the right amount of force and from the right direction such that the blue ball ends up close to the black spot. Or simulate ink with a checkerboard pattern and exert just the appropriate forces so that it forms exactly the Yin Yang symbol shortly after. Or, here comes a previous science fiction-like example. This previous differentiable technique adds carefully crafted ripples to the water to make sure that it ends up in a state that distorts the image of the squirrel in a way that a powerful and well-known neural network sees it not as a squirrel, but as a goldfish. Wow! And if this new evolutionary technique could do all of these tasks but better and faster, sign me up right now. What a time to be alive! What you see here is a report of this exact paper we have talked about which was made by Weights & Biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
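For the curious, here is what an evolution strategies step actually computes, as a minimal Python sketch rather than the authors' code: no gradients are backpropagated; instead, the parameters are perturbed randomly, each perturbation is scored, and the parameters move toward the better-scoring ones. The fitness below is a toy L2 match to a target vector, a stand-in for rendering the triangles and comparing the result against the target image or a CLIP score.

import numpy as np

rng = np.random.default_rng(42)
target = rng.uniform(size=50)  # stand-in for the target image
params = np.zeros(50)          # stand-in for flattened triangle parameters

def fitness(p):
    # Higher is better; the real objective would be a rendered-image match.
    return -np.sum((p - target) ** 2)

pop, sigma, lr = 64, 0.1, 0.05
for step in range(1000):
    noise = rng.standard_normal((pop, params.size))            # random perturbations
    scores = np.array([fitness(params + sigma * n) for n in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize scores
    params += lr / (pop * sigma) * noise.T @ scores            # step toward the winners
print(fitness(params))  # climbs toward 0 as params approach the target

The genetic algorithm from the beginning of the video differs mainly in how it moves: it keeps a whole population and recombines and mutates its fittest members, while evolution strategies use the scored perturbations as a gradient estimate for a single set of parameters.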
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.8, "end": 10.88, "text": " Today is going to be all about simulating evolution on our computers."}, {"start": 10.88, "end": 17.28, "text": " This can build virtual cars, virtual creatures, and even paint the Mona Lisa."}, {"start": 17.84, "end": 23.44, "text": " These all sound amazing, but evolution on a computer, how?"}, {"start": 23.44, "end": 32.24, "text": " A few years ago, a really fun online app surfaced that used a genetic algorithm to evolve the morphology"}, {"start": 32.24, "end": 39.52, "text": " of a simple 2D car with the goal of having it roll as far away from a starting point as possible."}, {"start": 40.16, "end": 47.6, "text": " A genetic algorithm? What is that? Well, it is a super simple technique where we start out from a"}, {"start": 47.6, "end": 53.44, "text": " set of random solutions and likely find out that none of them really work too well."}, {"start": 54.160000000000004, "end": 62.24, "text": " However, we start combining and mutating, or in other words, changing small parts of this solution"}, {"start": 62.24, "end": 71.92, "text": " until... oh yes, something starts to move. Now, as soon as at least one wheel is placed correctly,"}, {"start": 71.92, "end": 78.48, "text": " the algorithm will recognize that this one rolled so much further and keep the genes of this"}, {"start": 78.48, "end": 86.0, "text": " solution in the pool to breed no similar solutions from it. A similar concept can also be used to"}, {"start": 86.0, "end": 92.24000000000001, "text": " design the optimal morphology of a virtual creature to make it skip forward faster."}, {"start": 92.8, "end": 100.64, "text": " And look, that is so cool! Over time, the technique learns to more efficiently navigate a flat"}, {"start": 100.64, "end": 108.4, "text": " terrain by redesigning its legs that are now reminiscent of small springs and uses them to skip"}, {"start": 108.4, "end": 116.08, "text": " its way forward. And here comes something even cooler if we change the terrain, the design of an"}, {"start": 116.08, "end": 123.36, "text": " effective agent also changes accordingly. And the super interesting part here is that it came up"}, {"start": 123.36, "end": 130.64, "text": " with an asymmetric design that is able to climb stairs and travel uphill efficiently."}, {"start": 131.52, "end": 138.0, "text": " Loving it! And in this new paper, scientists at Google Brain tried to turbocharge"}, {"start": 138.0, "end": 144.56, "text": " an evolutionary algorithm to be able to take a bunch of transparent triangles and get it to"}, {"start": 144.56, "end": 151.84, "text": " reorganize them so they will paint the Mona Lisa for us. Or any image in particular. The technique"}, {"start": 151.84, "end": 158.24, "text": " they are using is called Evolution Strategies and they say that it is much faster at this than"}, {"start": 158.24, "end": 165.44, "text": " previous techniques. Well, I will believe it when I see it. So now, hold onto your papers and let's"}, {"start": 165.44, "end": 174.0, "text": " see together. Here is a basic evolutionary algorithm after 10,000 steps. Well, it is getting there,"}, {"start": 174.0, "end": 182.16, "text": " but it's not that great. And let's see how well their new method does with the same amount of steps."}, {"start": 182.16, "end": 190.4, "text": " Wow! My goodness! That is so much better! 
In fact, their 10,000 steps is close to equivalent"}, {"start": 190.4, "end": 198.0, "text": " to half a million steps with a previous algorithm. And we are not done yet, not even close."}, {"start": 198.0, "end": 205.68, "text": " The paper has two more amazing surprises. Surprise number one. We can even combine it with Open AI"}, {"start": 205.68, "end": 213.12, "text": " technique called Clip. This learns about pairs of images and their captions describing these images,"}, {"start": 213.12, "end": 220.16, "text": " which is an important part of Dolly too. They are amazing image generator AI. This could take"}, {"start": 220.16, "end": 227.04, "text": " completely outlandish concepts and create a beautiful photorealistic images out of it that are"}, {"start": 227.04, "end": 234.4, "text": " often as good as we would expect from a good human artist. Scholars holding onto their papers,"}, {"start": 234.4, "end": 242.23999999999998, "text": " a cyber frog, you name it. So, get this. Similarly to that, we can even write a piece of text"}, {"start": 242.23999999999998, "end": 248.56, "text": " and the evolutionary triangle builder will try to create an image that fits this text description."}, {"start": 249.12, "end": 254.64, "text": " With that, we can request a triangle version of a self portrait, a human,"}, {"start": 254.64, "end": 262.8, "text": " and look at how similar they are. Food for thought. But it can also try to draw what this new world,"}, {"start": 263.59999999999997, "end": 270.4, "text": " that is remarkable. Look at how beautifully it boils it down to its essence with as few"}, {"start": 270.4, "end": 278.47999999999996, "text": " as 200 or even just 25 triangles. Loving it. Also, drawing the Google headquarters,"}, {"start": 278.48, "end": 287.20000000000005, "text": " no problem at all. And here comes surprise number two. The authors claim that it is even faster"}, {"start": 287.20000000000005, "end": 294.40000000000003, "text": " than a differentiable renderer. These are really powerful optimization techniques that can even grow"}, {"start": 294.40000000000003, "end": 302.48, "text": " this beautiful statue out of nothing and do all that in just a few steps. So, claiming that it"}, {"start": 302.48, "end": 311.36, "text": " is even faster than that, well, that is very ambitious. Let's have a look. And, oh yes,"}, {"start": 311.36, "end": 318.88, "text": " as expected, the differentiable technique creates a beautiful image very quickly. Now try to beat that."}, {"start": 320.08000000000004, "end": 328.16, "text": " Wow, the new method converges to the final image even quicker. That speed is simply stunning."}, {"start": 328.16, "end": 334.72, "text": " Interestingly, they also have different styles. The differentiable renderer introduces textures"}, {"start": 334.72, "end": 342.40000000000003, "text": " that are not present in the original image. Well, the new technique uses large triangles to create"}, {"start": 342.40000000000003, "end": 349.20000000000005, "text": " a smooth approximation of the background and the hair and uses the smaller ones to get a better"}, {"start": 349.20000000000005, "end": 356.64000000000004, "text": " approximation of the intricate details of the face. Loving it. Now, let's ramp up the challenge"}, {"start": 356.64, "end": 365.2, "text": " and test these a little more and add some prompts. And, oh yes, this is why the differentiable"}, {"start": 365.2, "end": 372.0, "text": " technique flounders on the prompts. 
Look, it almost seems to try to do everything all at once."}, {"start": 372.64, "end": 379.2, "text": " While the new technique starts out from a fair approximation and converges to a great solution"}, {"start": 379.2, "end": 386.08, "text": " super quickly. I hope that this new take on evolution strategies will be plugged in to applications"}, {"start": 386.08, "end": 392.96, "text": " where differentiable techniques do well and perhaps do even better. That would be absolutely"}, {"start": 392.96, "end": 398.96, "text": " amazing because these are a treasure trove of science fiction-like applications. For instance,"}, {"start": 398.96, "end": 406.32, "text": " we would be able to almost instantly solve this billiard game with just the right amount of force"}, {"start": 406.32, "end": 413.36, "text": " and from the right direction such that the blue ball ends up close to the black spot. Or simulate"}, {"start": 413.36, "end": 420.0, "text": " ink with a checkerboard pattern and exert just the appropriate forces so that it forms exactly"}, {"start": 420.0, "end": 426.96000000000004, "text": " the Yin Yang symbol shortly after. Or, here comes a previous science fiction-like example."}, {"start": 426.96000000000004, "end": 433.44, "text": " This previous differentiable technique adds carefully crafted repos to the water to make sure"}, {"start": 433.44, "end": 440.08000000000004, "text": " that it ends up in a state that distorts the image of the squirrel in a way that a powerful"}, {"start": 440.08, "end": 445.76, "text": " and well-known neural network sees it not as a squirrel, but as a goldfish."}, {"start": 446.56, "end": 453.68, "text": " Wow! And if this new evolutionary technique could do all of these tasks but better and faster,"}, {"start": 454.24, "end": 461.12, "text": " sign me up right now. What a time to be alive! What you see here is a report of this exact paper"}, {"start": 461.12, "end": 466.47999999999996, "text": " we have talked about which was made by weights and biases. I put a link to it in the description,"}, {"start": 466.48, "end": 470.96000000000004, "text": " make sure to have a look, I think it helps you understand this paper better."}, {"start": 470.96000000000004, "end": 476.32, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 476.32, "end": 481.92, "text": " Using their system you can create beautiful reports like this one to explain your findings to"}, {"start": 481.92, "end": 488.72, "text": " your colleagues better. It is used by many prestigious labs including OpenAI, Toyota Research,"}, {"start": 488.72, "end": 495.36, "text": " GitHub and more. And the best part is that weight and biases is free for all individuals,"}, {"start": 495.36, "end": 502.96000000000004, "text": " academics and open source projects. Make sure to visit them through wnb.com slash papers"}, {"start": 502.96000000000004, "end": 508.32, "text": " or just click the link in the video description and you can get a free demo today."}, {"start": 508.32, "end": 513.76, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 513.76, "end": 527.6, "text": " better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=imuarn1A6p8
NVIDIA’s Ray Tracer: Wow, They Nailed It Again! 🤯
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 NVIDIA's paper "Fast Volume Rendering with Spatiotemporal Reservoir Resampling" is available here: https://dqlin.xyz/pubs/2021-sa-VOR/ https://graphics.cs.utah.edu/research/projects/volumetric-restir/ https://research.nvidia.com/publication/2021-11_Fast-Volume-Rendering 🔆 The free light transport course is available here. You'll love it! https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ Volumetric path tracer by michael0884: https://www.shadertoy.com/view/NtXSR4 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/volcanic-eruption-ash-cloud-dramatic-1867439/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how NVIDIA's incredible light transport algorithm can render these notoriously difficult photorealistic smoke plumes, volumetric bunnies, and even explosions interactively. We are going to talk about this amazing paper in a moment, and given that NVIDIA can already render this absolutely beautiful marble demo in real time, not only with real-time light transport, but also with our other favorite around here, real-time physics. This is just to say that tech transfer is happening here, the papers are real, the papers that you see here really make it to real applications that everyone can use at home. Please note that they won't say exactly what tech is under the hood here, but looking at their best light transport papers might be a good starting point of what is to come in the future. Now, back to the intro. What you are seeing here should not be possible at all. Why is that? Well, when we use a light transport simulation algorithm to create such an image, we get a photorealistic image, but as you see, not immediately. And not even close. It is a miracle of science that with this, we can shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and initially the inaccuracies in our estimations show up as noise in these images. As we shoot more and more rays, this clears up over time. However, there are two problems. One, this can take hours to finish. And two, in the case of volumetric light transport, these light rays can bounce, scatter, and get absorbed inside a dense medium. In that case, it takes much, much longer. Not only hours, sometimes even days. Oh my goodness. So how did NVIDIA pull this off? Well, they built on a previous technique of theirs that is capable of rendering 3.4 million light sources, and not just for one image, but it can even handle animation. Just look at all this beautiful footage. Now it would be so great to have a variant of this that works on volumetric effects such as explosions and smoke plumes, but of course, anyone who has tried it before says that there is no chance that those could run interactively. No chance at all. Well, scientists at NVIDIA beg to differ. Check this out. This is a previous technique and it says 12 SPP. That means 12 samples per pixel, which is more or less simulating 12 light rays for each pixel of these images. That is not a lot of information, so let's have a closer look at what this previous method can do with that. Well, there is still tons of noise and it flickers a lot when we rotate the camera around. I would say that there is not much hope here; we would need hundreds or maybe even thousands of samples per pixel to get a clean image. Not 12. And look, it gets even worse. What? The new technique runs not thousands, not hundreds, and not even 12 samples per pixel in the same amount of time. But really, can this really be one sample per pixel? Can you do anything meaningful with that? Well, hold on to your papers and check this out. Wow, look at that. It can do so much better with just one sample per pixel than the previous technique can do with 12. That is absolutely incredible. I really don't know what to say. And it works not just on this example, but on many others as well. This is so far ahead of the previous technique that it seems like science fiction. I absolutely love it. However, yes, I can hear you asking, Károly, but these are still noisy. 
Why get so excited about all these incomplete results? Well, do you remember what we did with the previous NVIDIA light transport paper? We took these noisy inputs and plugged them into a denoising technique that is specifically designed for light transport algorithms, and it tries to guess what is behind the noise. And as you see, these can help a ton. But, you know, can they help so much that a still noisy input, a meager one sample per pixel, can become usable? Well, let's have a look. Oh my goodness, look at that. The result is clearly not perfect. One light ray per pixel can hardly give us a perfect image, but after denoising, this is unreal. We are almost there right away. It has so much less flickering than the previous technique with many more samples. And we are experienced Fellow Scholars over here, so let's also check the amount of detail in the image. And, whoa! There is no contest here. This technique also pulls off all this wizardry by trying to reuse information that is otherwise typically thrown away. For instance, with no reuse, we get this baseline result. And if we reuse information from a previous frame in the animation, we get this. That is significantly better than the baseline. If we reuse previous rays spatially, that is also an improvement. So our question is: are these different kinds of improvements? Well, let's add them together. And... oh yes, now this is what we are here for. Look at how much more information there is in this image. So now, even these amazing volumetric scenes can be rendered interactively. I am out of words. What a time to be alive! And if you feel inspired by these results, I have a free master-level course on light transport, where we write a full light simulation program from scratch and learn about physics, the world around us, and more. If you watch it, you will see the world differently. And that is free education for everyone. That's what I want. So, the course is available free of charge for everyone, no strings attached; check it out in the video description. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
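The "reuse information from a previous frame" idea can be illustrated with a toy sketch, assuming a simple exponential moving average over frames. This is not NVIDIA's actual algorithm, which reuses and reweights whole reservoirs of candidate light paths across frames and neighboring pixels, but it shows how temporal reuse turns a one-sample-per-pixel signal into a stable estimate.

import random

def render_frame(true_value=0.5, n_pixels=4):
    # A one-sample-per-pixel frame: every pixel is a single noisy path sample.
    return [true_value + random.uniform(-0.5, 0.5) for _ in range(n_pixels)]

def temporal_reuse(history, current, alpha=0.1):
    # Blend the new noisy frame into the running history. A crude stand-in
    # for the paper's temporal reservoir reuse.
    if history is None:
        return current
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

random.seed(1)
history = None
for _ in range(60):
    history = temporal_reuse(history, render_frame())
# After 60 frames, every pixel has settled close to the true value of 0.5.
print([round(p, 3) for p in history])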
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kato Jona Ifehir."}, {"start": 5.0, "end": 11.36, "text": " Today we are going to see how NVIDIA's incredible light transport algorithm can render"}, {"start": 11.36, "end": 18.68, "text": " these notoriously difficult photorealistic smoke plumes, volumetric bunnies, and even explosions"}, {"start": 18.68, "end": 20.240000000000002, "text": " interactively."}, {"start": 20.240000000000002, "end": 26.0, "text": " We are going to talk about this amazing paper in a moment and given that NVIDIA can already"}, {"start": 26.0, "end": 33.32, "text": " render this absolutely beautiful marble demo in real time, not only with real time light"}, {"start": 33.32, "end": 40.04, "text": " transport, but also with our other favorite around here real time physics."}, {"start": 40.04, "end": 45.92, "text": " This is just to say that tech transfer is happening here, the papers are real, the papers that"}, {"start": 45.92, "end": 52.08, "text": " you see here really make it to real applications that everyone can use at home."}, {"start": 52.08, "end": 57.879999999999995, "text": " Please note that they won't say exactly what tech is under the hood here, but looking"}, {"start": 57.879999999999995, "end": 62.64, "text": " at their best light transport papers might be a good starting point of what is to come"}, {"start": 62.64, "end": 63.64, "text": " in the future."}, {"start": 63.64, "end": 66.24, "text": " Now, back to the intro."}, {"start": 66.24, "end": 70.44, "text": " What you are seeing here should not be possible at all."}, {"start": 70.44, "end": 71.44, "text": " Why is that?"}, {"start": 71.44, "end": 78.44, "text": " Well, when we use a light transport simulation algorithm to create such an image, we get a photorealistic"}, {"start": 78.44, "end": 82.03999999999999, "text": " image, but as you see, not immediately."}, {"start": 82.04, "end": 83.72000000000001, "text": " And not even close."}, {"start": 83.72000000000001, "end": 89.06, "text": " It is a miracle of science that with this, we can shoot millions and millions of light"}, {"start": 89.06, "end": 96.0, "text": " rays into the scene to estimate how much light is bouncing around, and initially the inaccuracies"}, {"start": 96.0, "end": 100.52000000000001, "text": " in our estimations show up as noise in these images."}, {"start": 100.52000000000001, "end": 105.32000000000001, "text": " As we shoot more and more rays, this clears up over time."}, {"start": 105.32000000000001, "end": 108.88000000000001, "text": " However, there are two problems."}, {"start": 108.88, "end": 112.32, "text": " One, this can take hours to finish."}, {"start": 112.32, "end": 119.19999999999999, "text": " And two, in the case of volumetric light transport, these light rays can bounce, scatter, and"}, {"start": 119.19999999999999, "end": 122.39999999999999, "text": " get absorbed inside a dense medium."}, {"start": 122.39999999999999, "end": 126.36, "text": " In that case, it takes much, much longer."}, {"start": 126.36, "end": 130.0, "text": " Not only hours, sometimes even days."}, {"start": 130.0, "end": 131.72, "text": " Oh my goodness."}, {"start": 131.72, "end": 134.84, "text": " So how did Nvidia pull this off?"}, {"start": 134.84, "end": 142.0, "text": " Well, they built on a previous technique of theirs that is capable of rendering 3.4 million"}, {"start": 142.0, "end": 149.16, "text": " light sources and not just for one image, but it can even handle 
animation."}, {"start": 149.16, "end": 151.84, "text": " Just look at all this beautiful footage."}, {"start": 151.84, "end": 158.08, "text": " Now it would be so great to have a variant of this that works on volumetric effects such"}, {"start": 158.08, "end": 164.16, "text": " as explosions and smoke plumes, but of course, anyone who has tried it before says that there"}, {"start": 164.16, "end": 167.84, "text": " is no chance that those could run interactively."}, {"start": 167.84, "end": 169.52, "text": " No chance at all."}, {"start": 169.52, "end": 173.35999999999999, "text": " Well, scientists at Nvidia back to differ."}, {"start": 173.35999999999999, "end": 174.6, "text": " Check this out."}, {"start": 174.6, "end": 179.04, "text": " This is a previous technique and it says 12 SPP."}, {"start": 179.04, "end": 185.72, "text": " That means 12 samples per pixel, which is more or less simulating 12 light rays for each"}, {"start": 185.72, "end": 188.04, "text": " pixel of these images."}, {"start": 188.04, "end": 193.07999999999998, "text": " That is not a lot of information, so let's have a closer look at what this previous method"}, {"start": 193.08, "end": 194.52, "text": " can do with that."}, {"start": 194.52, "end": 201.64000000000001, "text": " Well, there is still tons of noise and it flickers a lot when we rotate the camera around."}, {"start": 201.64000000000001, "end": 209.12, "text": " Well, I would say that there is not much hope here we would need hundreds or maybe even thousands"}, {"start": 209.12, "end": 212.8, "text": " of samples per pixel to get a clean image."}, {"start": 212.8, "end": 213.8, "text": " Not 12."}, {"start": 213.8, "end": 217.68, "text": " And look, it gets even worse."}, {"start": 217.68, "end": 218.68, "text": " What?"}, {"start": 218.68, "end": 226.32, "text": " No technique runs not thousands, not hundreds and not even 12 samples per pixel in the same"}, {"start": 226.32, "end": 233.24, "text": " amount of time, but really can this really be one sample per pixel?"}, {"start": 233.24, "end": 235.8, "text": " Can you do anything meaningful with that?"}, {"start": 235.8, "end": 239.92000000000002, "text": " Well, hold on to your papers and check this out."}, {"start": 239.92000000000002, "end": 242.60000000000002, "text": " Wow, look at that."}, {"start": 242.6, "end": 248.88, "text": " It can do so much better with just one vapor pixel than the previous technique can do with"}, {"start": 248.88, "end": 249.88, "text": " 12."}, {"start": 249.88, "end": 252.48, "text": " That is absolutely incredible."}, {"start": 252.48, "end": 254.68, "text": " I really don't know what to say."}, {"start": 254.68, "end": 260.2, "text": " And it works not just on this example, but on many others as well."}, {"start": 260.2, "end": 265.28, "text": " This is so far ahead of the previous technique that it seems like science fiction."}, {"start": 265.28, "end": 267.28, "text": " I absolutely love it."}, {"start": 267.28, "end": 273.47999999999996, "text": " However, yes, I can hear you asking, Karoi, but these are still noisy."}, {"start": 273.47999999999996, "end": 277.4, "text": " Why get so excited for all these incomplete results?"}, {"start": 277.4, "end": 283.35999999999996, "text": " Well, do you remember what we did with the previous Nvidia light transport paper?"}, {"start": 283.35999999999996, "end": 288.96, "text": " We took these noisy inputs and plugged them into a denoising technique that is specifically"}, {"start": 288.96, "end": 292.2, "text": " designed 
for light transport algorithms."}, {"start": 292.2, "end": 296.59999999999997, "text": " And it tries to guess what is behind the noise."}, {"start": 296.6, "end": 299.72, "text": " And as you see, these can help a ton."}, {"start": 299.72, "end": 307.20000000000005, "text": " But, you know, can they help so much that a still noisy input, a meager, one sample per"}, {"start": 307.20000000000005, "end": 309.56, "text": " pixel can become usable?"}, {"start": 309.56, "end": 311.96000000000004, "text": " Well, let's have a look."}, {"start": 311.96000000000004, "end": 315.40000000000003, "text": " Oh my goodness, look at that."}, {"start": 315.40000000000003, "end": 318.32000000000005, "text": " The result is clearly not perfect."}, {"start": 318.32000000000005, "end": 324.48, "text": " One light ray per pixel can hardly give us a perfect image, but after denoising, this"}, {"start": 324.48, "end": 325.8, "text": " is unreal."}, {"start": 325.8, "end": 328.40000000000003, "text": " We are almost there right away."}, {"start": 328.40000000000003, "end": 334.16, "text": " It has so much less flickering than the previous technique with much more samples."}, {"start": 334.16, "end": 339.84000000000003, "text": " And we are experienced fellow scholars over here, so let's also check the amount of detail"}, {"start": 339.84000000000003, "end": 341.36, "text": " in the image."}, {"start": 341.36, "end": 343.16, "text": " And, whoa!"}, {"start": 343.16, "end": 345.6, "text": " There is no contest here."}, {"start": 345.6, "end": 352.76, "text": " This technique also pulls off all this wizardry by trying to reuse information that is otherwise"}, {"start": 352.76, "end": 354.76, "text": " typically thrown away."}, {"start": 354.76, "end": 358.88, "text": " For instance, with no reuse, we get this baseline result."}, {"start": 358.88, "end": 365.59999999999997, "text": " And if we reuse information from a previous frame in the animation, we get this."}, {"start": 365.59999999999997, "end": 368.88, "text": " That is significantly better than the baseline."}, {"start": 368.88, "end": 374.36, "text": " If we reuse previous rays, especially, that is also an improvement."}, {"start": 374.36, "end": 378.56, "text": " So our question is, are these different kinds of improvements?"}, {"start": 378.56, "end": 381.4, "text": " Well, let's add them together."}, {"start": 381.4, "end": 382.4, "text": " And..."}, {"start": 382.4, "end": 384.71999999999997, "text": " Oh yes, now this is what we are here for."}, {"start": 384.72, "end": 389.0, "text": " Look at how much more information there is in this image."}, {"start": 389.0, "end": 396.20000000000005, "text": " So now, even these amazing volumetric scenes can be rendered interactively, I am out of"}, {"start": 396.20000000000005, "end": 397.28000000000003, "text": " words."}, {"start": 397.28000000000003, "end": 399.40000000000003, "text": " What a time to be alive!"}, {"start": 399.40000000000003, "end": 405.28000000000003, "text": " And if you feel inspired by these results, I have a free master-level course on Light"}, {"start": 405.28000000000003, "end": 411.52000000000004, "text": " Transport, where we write a full light simulation program from scratch and learn about physics,"}, {"start": 411.52000000000004, "end": 413.72, "text": " the world around us, and more."}, {"start": 413.72, "end": 416.40000000000003, "text": " If you watch it, you will see the world differently."}, {"start": 416.40000000000003, "end": 419.68, "text": " And that is free education 
for everyone."}, {"start": 419.68, "end": 420.68, "text": " That's what I want."}, {"start": 420.68, "end": 426.72, "text": " So, the course is available free of charge for everyone, no strings attached, check it"}, {"start": 426.72, "end": 428.76000000000005, "text": " out in the video description."}, {"start": 428.76000000000005, "end": 431.76000000000005, "text": " This video has been supported by weights and biases."}, {"start": 431.76000000000005, "end": 437.68, "text": " And being a machine learning researcher means doing tons of experiments and, of course,"}, {"start": 437.68, "end": 439.48, "text": " creating tons of data."}, {"start": 439.48, "end": 444.56, "text": " But, I am not looking for data, I am looking for insights."}, {"start": 444.56, "end": 447.72, "text": " And weights and biases helps with exactly that."}, {"start": 447.72, "end": 453.84000000000003, "text": " They have tools for experiment tracking, data set and model versioning, and even hyper-parameter"}, {"start": 453.84000000000003, "end": 455.64000000000004, "text": " optimization."}, {"start": 455.64000000000004, "end": 462.48, "text": " No wonder this is the experiment tracking tool choice of OpenAI, Toyota Research, Samsung,"}, {"start": 462.48, "end": 464.8, "text": " and many more prestigious labs."}, {"start": 464.8, "end": 473.04, "text": " Make sure to use the link WNB.me slash paper intro, or just click the link in the video description"}, {"start": 473.04, "end": 478.88, "text": " and try this 10 minute example of weights and biases today to experience the wonderful"}, {"start": 478.88, "end": 484.88, "text": " feeling of training a neural network and being in control of your experiments."}, {"start": 484.88, "end": 487.12, "text": " After you try it, you won't want to go back."}, {"start": 487.12, "end": 497.12, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8hXsUNr3TXs
DeepMind Takes A Step Towards General AI! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "A Generalist Agent (DeepMind Gato)" is available here: https://www.deepmind.com/publications/a-generalist-agent ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail image: OpenAI DALL-E 2 Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at DeepMind's new AI, Gato, which can do almost everything at the same time. Playing games, controlling a robot arm, answering questions, labeling pictures, you name it. And the key is that all this is done by one AI. Now, in the last few years, DeepMind has built AI programs that solved a number of extremely difficult problems. Their chess AI is absolutely incredible, as it plays better than any human. Then they proceeded to tackle Go, a game with an even larger space of possible moves, with great success, as they beat the reigning world champion. But all of these are separate programs. They share some components, but all of them require a significant amount of engineering to tailor to the problem at hand. For instance, their amazing AlphaFold AI contains a lot of extra components that give it information about protein folding. Hence, it cannot be reused for other tasks as is. Their StarCraft 2 AI is also at the very least on the level of human grandmasters. This is also a separate AI. Now, of course, DeepMind did not spend years of hard work to build an AI that could play just video games. So, why do all this? Well, they use video games as an excellent test bed for something even greater. Their goal is to write a general AI, the one true algorithm that can do it all. In their mission statement, they often say that step number one is to solve general intelligence, and step number two is to use it to solve everything else. And now, hold onto your papers, and I cannot believe that I am saying this, but here is their newest AI that takes a solid step in this direction. So, what can it do? Well, for instance, it can label images. And these are not some easy images. I like how it doesn't just say that here are some children and slices of pizza. It says that we have a group of children who are eating pizza. Now, the captions are not perfect. We will get back to exactly how good this AI is in a moment. Okay, so, what else? It can also chat with us. We can ask it to recommend books. Ask it why a particular recommended book is interesting. Ask it about black holes, protein folding, you name it. It does really well. But note that it can still be factually incorrect, even on simpler questions. And now, with this one, we are getting there: it can also control this robot hand. It was shown how to stack the red block onto the blue one. But now, we ask it to put the blue one on the green block. This is something that it hasn't been shown before. So, can it do it? Oh yes, loving it. Great job, little robot. So, these were three examples. And now, we have to skip forward a little bit. Why is that? Well, we have to skip because I cannot go through everything that it can do. And now, if you have been holding onto your papers so far, squeeze that paper, because it can perform more than 600 tasks. 600? My goodness. How is that even possible? Well, the key here is that this is a neural network where we can dump in all kinds of data at the same time. Look, we can train it on novels, images and questions about these images, Atari game videos and controller actions, and even all this cool robot arm data. And look, the colors show that all of this data can be used to train their system. This is truly just one AI that can perform all of these tasks. Okay, and now comes the most important question. How good is it at these tasks? This is where I fell off the chair when I read this paper. Just look at this.
What in the world? It is at least half as good as a human expert in about 450 out of about 600 tasks. And it is as good as a human expert in about a quarter of these tasks. That is mind-blowing. And note that, once again, the best part of this work is that we don't need 600 different techniques to solve these 600 tasks. We just need one generalist AI that does it all. What a time to be alive. Now, clearly, it is not perfect, not even close. However, this is Two Minute Papers. This is the land of Fellow Scholars. So we will apply the First Law of Papers, which says that research is a process. Do not look at where we are; look at where we will be, two more papers down the line. So how do we do that? Well, we look at this chart and... oh my. Can this really be true? This chart says that you have seen nothing yet. What does all this mean? Well, this means that as we increase the size of the neural network, we still see consistent growth in its capabilities. This means that DeepMind is just getting started with this. The next iteration will be way better. Just one or two more papers down the line, and these inaccuracies that you have seen earlier might become a distant memory. By then, we might ask: remember when DeepMind's AI answered something incorrectly? Oh yeah, that was a completely different world back then, because it was just two papers before this one. This one really keeps me up at night. What an incredible achievement. So, does this get your mind going? What would you use this AI for? Let me know in the comments below. Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their Tables feature, and the best part about it is that it is not only able to handle pretty much any kind of data you can throw at it, but it also presents your experiments to you in a way that is easy to understand. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
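The "dump in all kinds of data at the same time" idea can be sketched in a few lines of Python. This is only a conceptual illustration with hypothetical toy tokenizers, not Gato's real ones (which work on text subwords, image patches, and discretized continuous values), but it shows the shape of the trick: every modality becomes tokens in one shared sequence, so a single sequence model can be trained on captioning, chat, Atari, and robotics data alike.

def tokenize_text(text):
    # Toy text tokenizer: one token per word.
    return [f"txt:{word}" for word in text.split()]

def tokenize_image(image_id, n_patches=4):
    # Toy image tokenizer: one token per image patch.
    return [f"img:{image_id}:patch{i}" for i in range(n_patches)]

def tokenize_action(action):
    # Toy action tokenizer for robot or game controller commands.
    return [f"act:{action}"]

# One robotics episode, flattened into a single token sequence.
episode = (
    tokenize_text("stack the blue block on the green block")
    + tokenize_image("camera_frame_0")
    + tokenize_action("move_arm_left")
    + tokenize_image("camera_frame_1")
    + tokenize_action("close_gripper")
)
print(episode)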
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 14.0, "text": " Today we are going to have a look at DeepMind's new AI Gato, which can do almost everything at the same time."}, {"start": 14.0, "end": 21.0, "text": " Playing games, controlling robot arm, answering questions, labeling pictures, you name it."}, {"start": 21.0, "end": 25.0, "text": " And the key is that all this is done by one AI."}, {"start": 25.0, "end": 33.0, "text": " Now, in the last few years, DeepMind has built AI programs that solved a number of extremely difficult problems."}, {"start": 33.0, "end": 39.0, "text": " Their chess AI is absolutely incredible as it plays better than any human."}, {"start": 39.0, "end": 50.0, "text": " Then they proceeded to tackle Go, a game with an even larger space of possible moves with great success as they beat the reigning World Champion."}, {"start": 50.0, "end": 61.0, "text": " But all of these are separate programs. They share some components, but all of them require a significant amount of engineering to tailor to the problem at hand."}, {"start": 61.0, "end": 70.0, "text": " For instance, their amazing Alpha Fold AI contains a lot of extra components that give it information about protein folding."}, {"start": 70.0, "end": 74.0, "text": " Hence, it cannot be reused for other tasks as is."}, {"start": 74.0, "end": 81.0, "text": " Their Starcraft 2 AI is also at the very least on the level of human grandmasters."}, {"start": 81.0, "end": 84.0, "text": " This is also a separate AI."}, {"start": 84.0, "end": 92.0, "text": " Now, of course, DeepMind did not spend years of hard work to build an AI that could play just video games."}, {"start": 92.0, "end": 94.0, "text": " So, why do all this?"}, {"start": 94.0, "end": 100.0, "text": " Well, they use video games as an excellent test bed for something even greater."}, {"start": 100.0, "end": 107.0, "text": " Their goal is to write a general AI that is the one true algorithm that can do it all."}, {"start": 107.0, "end": 117.0, "text": " In their mission statement, they often say step number one is to solve general intelligence and step number two use it to solve everything else."}, {"start": 117.0, "end": 129.0, "text": " And now, hold onto your papers and I cannot believe that I am saying this, but here is their newest AI that takes a solid step in this direction."}, {"start": 129.0, "end": 131.0, "text": " So, what can it do?"}, {"start": 131.0, "end": 135.0, "text": " Well, for instance, it can label images."}, {"start": 135.0, "end": 138.0, "text": " And these are not some easy images."}, {"start": 138.0, "end": 143.0, "text": " I like how it doesn't just say that here are some children and slices of pizza."}, {"start": 143.0, "end": 148.0, "text": " It says that we have a group of children who are eating pizza."}, {"start": 148.0, "end": 151.0, "text": " Now, the captions are not perfect."}, {"start": 151.0, "end": 155.0, "text": " We will get back to exactly how good this AI is in a moment."}, {"start": 155.0, "end": 160.0, "text": " Okay, so, what else? It can also chat with us."}, {"start": 160.0, "end": 163.0, "text": " We can ask it to recommend books."}, {"start": 163.0, "end": 167.0, "text": " Ask it why a particular recommended book is interesting."}, {"start": 167.0, "end": 171.0, "text": " Ask it about black holes protein folding. 
You name it."}, {"start": 171.0, "end": 173.0, "text": " It does really well."}, {"start": 173.0, "end": 179.0, "text": " But note that it can still be factually incorrect even on simpler questions."}, {"start": 179.0, "end": 185.0, "text": " And now, with this one, we are getting there, it can also control this robot hand."}, {"start": 185.0, "end": 189.0, "text": " It was shown how to stack the red block onto the blue one."}, {"start": 189.0, "end": 194.0, "text": " But now, we ask it to put the blue one on the green block."}, {"start": 194.0, "end": 197.0, "text": " This is something that it hasn't been shown before."}, {"start": 197.0, "end": 199.0, "text": " So, can it do it?"}, {"start": 199.0, "end": 203.0, "text": " Oh yes, loving it. Great job, little robot."}, {"start": 203.0, "end": 205.0, "text": " So, these were three examples."}, {"start": 205.0, "end": 209.0, "text": " And now, we have to skip forward a little bit."}, {"start": 209.0, "end": 216.0, "text": " Why is that? Well, we have to skip because I cannot go through everything that it can do."}, {"start": 216.0, "end": 221.0, "text": " And now, if you have been holding onto your paper so far, squeeze that paper"}, {"start": 221.0, "end": 225.0, "text": " because it can perform more than 600 tasks."}, {"start": 225.0, "end": 230.0, "text": " 600? My goodness. How is that even possible?"}, {"start": 230.0, "end": 238.0, "text": " Well, the key here is that this is a neural network where we can dump in all kinds of data at the same time."}, {"start": 238.0, "end": 244.0, "text": " Look, we can train it on novels, images, and questions about these images,"}, {"start": 244.0, "end": 250.0, "text": " Atari game videos and controller actions, and even all these cool robot arm data."}, {"start": 250.0, "end": 256.0, "text": " And look, the colors show that all these data can be used to train their system."}, {"start": 256.0, "end": 262.0, "text": " This is truly just one AI that can perform all of these tasks."}, {"start": 262.0, "end": 266.0, "text": " Okay, and now comes the most important question."}, {"start": 266.0, "end": 272.0, "text": " How good is it at these tasks? And this is when I fell off the chair when I read this paper."}, {"start": 272.0, "end": 274.0, "text": " Just look at this."}, {"start": 274.0, "end": 276.0, "text": " What in the world?"}, {"start": 276.0, "end": 285.0, "text": " It is at least half as good as a human expert in about 450 out of about 600 tasks."}, {"start": 285.0, "end": 291.0, "text": " And it is as good as a human expert in about a quarter of these tasks."}, {"start": 291.0, "end": 293.0, "text": " That is mind blowing."}, {"start": 293.0, "end": 301.0, "text": " And note that once again, the best part of this work is that we don't need 600 different techniques to solve these 600 tasks."}, {"start": 301.0, "end": 305.0, "text": " We just need one generalist AI that does it all."}, {"start": 305.0, "end": 307.0, "text": " What a time to be alive."}, {"start": 307.0, "end": 311.0, "text": " Now, clearly, it is not perfect, not even close."}, {"start": 311.0, "end": 316.0, "text": " However, this is two minute papers. This is the land of fellow scholars."}, {"start": 316.0, "end": 322.0, "text": " So we will apply the first law of papers which says that research is a process."}, {"start": 322.0, "end": 324.0, "text": " Do not look at where we are."}, {"start": 324.0, "end": 328.0, "text": " Look at where we will be. 
Two more papers down the line."}, {"start": 328.0, "end": 330.0, "text": " So how do we do that?"}, {"start": 330.0, "end": 332.0, "text": " Well, we look at this chart and..."}, {"start": 332.0, "end": 334.0, "text": " Oh my."}, {"start": 334.0, "end": 335.0, "text": " Can this really be true?"}, {"start": 335.0, "end": 339.0, "text": " This chart says that you have seen nothing yet."}, {"start": 339.0, "end": 345.0, "text": " What does all this mean? Well, this means that as we increase the size of the neural network,"}, {"start": 345.0, "end": 349.0, "text": " we still see consistent growth in its capabilities."}, {"start": 349.0, "end": 352.0, "text": " This means that deep-mide is just getting started with this."}, {"start": 352.0, "end": 355.0, "text": " The next iteration will be way better."}, {"start": 355.0, "end": 363.0, "text": " Just one or two more papers down the line and these inaccuracies that you have seen earlier might become a distant memory."}, {"start": 363.0, "end": 369.0, "text": " By then, we might ask, remember when deep-mide's AI answered something incorrectly?"}, {"start": 369.0, "end": 376.0, "text": " Oh yeah, that was a completely different world back then because it was just two papers before this one."}, {"start": 376.0, "end": 379.0, "text": " This one really keeps me up at night."}, {"start": 379.0, "end": 381.0, "text": " What an incredible achievement."}, {"start": 381.0, "end": 384.0, "text": " So, does this get your mind going?"}, {"start": 384.0, "end": 386.0, "text": " What would you use this AI for?"}, {"start": 386.0, "end": 388.0, "text": " Let me know in the comments below."}, {"start": 388.0, "end": 393.0, "text": " Wates and biases provide tools to track your experiments in your deep learning projects."}, {"start": 393.0, "end": 403.0, "text": " What you see here is their table's feature and the best part about it is that it is not only able to handle pretty much any kind of data you can throw at it,"}, {"start": 403.0, "end": 410.0, "text": " but it also presents your experiments to you in a way that is easy to understand."}, {"start": 410.0, "end": 416.0, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub, and more."}, {"start": 416.0, "end": 424.0, "text": " And the best part is that Wates and Biasis is free for all individuals, academics, and open source projects."}, {"start": 424.0, "end": 434.0, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 434.0, "end": 441.0, "text": " Our thanks to Wates and Biasis for their long-standing support and for helping us make better videos for you."}, {"start": 441.0, "end": 446.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mtkBIOK_28A
OpenAI DALL-E 2 - AI or Artist? Which is Better? 🧑‍🎨
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ 🕊️ Follow us for more results on Twitter! https://twitter.com/twominutepapers 🧑‍🎨 Check out Felícia Zsolnai-Fehér's works: https://www.instagram.com/feliciart_86/ 🧑‍🎨 Judit Somogyvári's works: https://www.artstation.com/sheyenne https://www.instagram.com/somogyvari.art/ 🎤 Beardyman (note: explicit): https://www.youtube.com/watch?v=-I2RikzeFbA 🍩Andrew Price, Master of Donuts: https://www.youtube.com/watch?v=jv3GijkzIdk 😸Grumpy Cat ready for takeoff: https://twitter.com/FabianMosele/status/1534560293069676546 🦊Fox repaint: https://twitter.com/eLTehH/status/1533724395281481729 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail image was created with DALL-E 2. Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 00:00 What is DALL-E 2? 00:54 Team 🧑‍🎨 vs Team 🤖 - Round 1 02:36 Team 🧑‍🎨 vs Team 🤖 - Round 2 03:37 Team 🧑‍🎨 vs Team 🤖 - Round 3 04:29 Team 🧑‍🎨 vs Team 🤖 - Round 4 05:14 Team 🧑‍🎨 vs Team 🤖 - Round 5 05:41 An important lesson 06:00 Grumpy Cat in space! 06:36 Fox Scientist 07:00 Platypus experiment 07:28 A visual pun 07:52 A time traveler 08:10 All hail the mighty Cyberfrog! 08:19 Rosie the Paper Dog 08:40 An image of you Fellow Scholars! 09:12 History in the making Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #dalle
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how the images of OpenAI's DALL-E 2 stack up to real works from real artists. This is going to be an amazing journey. So, hold on tight to your papers. DALL-E 2 is an AI that took the world by storm in the last few months. What does it do? Well, we can write a text description and hand it to the AI, which creates an amazing image of exactly that. And now, I am lucky to say that yes, I have ascended. Thanks to the generosity of OpenAI, I now have access to DALL-E 2. And my first thought was that if it can come up with high-quality and truly novel images, let's put those to the real test. In other words, see how they stand up against the works of real artists. Our first subject will be my favorite artist in the world, and that would be my wife, Felícia Zsolnai-Fehér. She drew this chimpanzee. Oh my, that is beautiful. So, let's try this against DALL-E 2. Okay, but how? Well, the first way of trying it will be through variant generation. What is that? Well, since this AI is trained to understand the connection between text and images, the understanding goes both ways. This means that we can not only input text and get an image, but input an image too. Does it understand what is depicted in this image? Let's see. Oh yes. Wow, it not only understands what is going on, but it can demonstrate that by creating really good variants of it. But not nearly as good as Felícia's, I have to say. Let's give it another chance. Here, I used a text prompt to get a hand-drawn chimp looking to the right. And I asked for lots and lots of detail. I have pleaded and pleaded for extra wrinkles to squeeze everything out of the AI, even cherry-picked the results, and these were the best I could get. You know, art is highly subjective. That's what makes it what it is. But for now, I have to say, Team Humanity versus Team AI: one-zero. And at this point, I thought this is going to be easy. Easy, easy, easy. And boy, you'll find out in a moment how wrong I was. Here is a work from one of my other favorite artists, Judit Somogyvári. Wow, that is one of her original characters. Very imaginative. I love it. So, let's ask for a variant of that. Does the AI understand what is going on here? You know, there's a lot to take in here, half ears, makeup, and the... Wow! I cannot believe that. I am lucky that the variants are working out so well, because writing a prompt for this would be quite a challenge. I am truly out of words. These are so good. But not as good as Judit's, in my opinion. Team Humanity versus Team AI: two-zero. That's a good score, but it is getting harder. Now, let's look at one of Judit's other works, an ostrich that is also a Formula One racer. I am loving this, especially the helmet and the goggles. I think I can whip up a good prompt for this. So, what did the AI come up with? Holy mother of papers. Is this some kind of joke? It cannot be that good. I checked the images over and over again. It is that good. Wow! Now, of course, scoring this is subjective, but clearly, both are excellent. I'll give this one a tie. Team Humanity versus Team AI: three-one. I told you, it is getting harder. Make sure to check out both Judit's and Felícia's works. The links are available in the video description. And now, for the next test, please give a warm welcome to the King of all donuts. Of course, that is the great Andrew Price, 3D modeler and teacher extraordinaire. Now, here are some of the donuts I came up with using DALL-E 2.
And some from the prompt that I ran for my daughter. Loving the googly eyes there, by the way. So these are some great donuts. Now, Andrew, it's your turn. Let's see. Wow! OpenAI's was a good one, but this... this is a Michelin-star donut right there. The AI-made ones are no match for Andrew's skills. So, Team Humanity versus Team AI: four-one. Now note that DALL-E 2 often isn't as good as a real, world-class artist, but the AI can create 10,000 of these images a day. That must be worth at least a point. So, four-two. Whew! We made it. Now, all this testing is great fun, and of course, highly subjective. So, don't take it too seriously. And here comes the even more important lesson. In my opinion, this tool should not be framed at all as Team Humanity versus Team AI. I believe it should be framed as Team Humanity supercharged by Team AI. Here is why, through six inspiring examples. Look, first, this is my second favorite prompt ever: a grumpy cat in a spaceship. That is incredible. Now, almost as soon as I posted it on Twitter, it inspired Fabian to create an animated version of it. And now, our adorable little grump is ready for takeoff, through the ingenuity of Team Human and Team AI together. So cool. What a time to be alive. So, I hear you asking: if this was my second favorite, what is the first? Well, two, have a look at this fox scientist. I cannot believe how well this came out. It has tons of personality. And it also inspired others to repaint it. Here it is. You see, I don't see this as a competition. I see this as a collaboration. Three, it also inspired the great Beardyman, who wanted to see a platypus studying the schematics for a cold fusion reactor. These are absolutely amazing. Not just the quality, but the variety of the results. My goodness. Note that he asked for a one-eyed platypus, but this iteration of DALL-E is not that good with numbers. Yet, if it could talk, it would probably apologize for it. Four, I also ran a prompt for David Brown from Boyinaband, which got real interesting, real quick. He was looking for a visual pun. What is that? Is this a pen? Is the pun that we got a pen instead? I am not sure. But this is very cool nonetheless. And I also wonder what the AI is trying to say here. Five, James from Linus Tech Tips also ran a superb prompt: a time traveler insensitive to humanity's trivialities. What a great idea. Oh my, loving the framing and the thoughtfulness of these results. And finally, six, it inspired me too. I also made this. All hail the mighty Cyberfrog. Loving it. And we also have a tiny chihuahua in our home and in the Two Minute Papers labs. She is Rosie, the paper dog, if you will. This is how we see her. And now we can also show how she sees herself when meeting other dogs. Oh yes, I think that is very accurate. And of course, as always, plus one. You didn't think we would leave without looking at some of you Fellow Scholars holding onto your papers, did you? Well, here you go. And as you see, these are some proper papers. You Fellow Scholars are loving it. But now let's have an even more explosive paper, like DALL-E 2. Oh yes, now we're talking. This scholar is you, who knows that this paper is history in the making. So much so that I cannot stop thinking about and playing with this thing. It really keeps me up at night. This is an incredible achievement that is going to transform the world around us. And fast. So, artists, AI people, what do you think? Which results did you like better? Let me know in the comments below.
And also, I will be on Twitter taking some requests from my followers, so make sure to follow me there and send some prompts. The best ones may appear in a future episode here. The link is in the description. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold onto your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour, versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
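The variant generation described earlier in this transcript can be sketched conceptually. In a CLIP-latent system like DALL-E 2, text and images are mapped into a shared embedding space, so an existing image can be embedded and then decoded into several new images from (near) that embedding. Everything below is a hypothetical toy stand-in, including the class names and the input file name; the real system uses a learned image encoder and a diffusion decoder.

import random

class ToyImageEncoder:
    # Stand-in for a CLIP-style image encoder: maps an image to a point
    # in a shared text-image embedding space.
    def embed(self, image_name):
        random.seed(hash(image_name) % (2**32))
        return [random.gauss(0.0, 1.0) for _ in range(8)]  # tiny 8-d latent

class ToyDecoder:
    # Stand-in for the diffusion decoder: turns an embedding back into an
    # image. Perturbing the latent before decoding suggests "same content,
    # different rendering", which is what variants look like.
    def decode(self, latent, noise=0.1):
        return [x + random.gauss(0.0, noise) for x in latent]

encoder, decoder = ToyImageEncoder(), ToyDecoder()
latent = encoder.embed("chimpanzee_drawing.png")  # hypothetical input image
for i in range(3):
    print(f"variant {i}:", [round(x, 2) for x in decoder.decode(latent)])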
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.76, "end": 14.64, "text": " Today we are going to see how open AI's Dolly2AI's images stack up to real works from real artists."}, {"start": 14.64, "end": 17.6, "text": " This is going to be an amazing journey."}, {"start": 17.6, "end": 25.52, "text": " So, hold on tight to your papers. Dolly2 is an AI that took the world by storm in the last few months."}, {"start": 25.52, "end": 35.76, "text": " What does it do? Well, we can write it, attack description, and hand it to the AI that creates an amazing image of exactly that."}, {"start": 35.76, "end": 40.56, "text": " And now, I am lucky to say that yes, I have ascended."}, {"start": 40.56, "end": 46.32, "text": " Thanks to the generosity of open AI, I now have access to Dolly2."}, {"start": 46.32, "end": 55.92, "text": " And my first thought was that if it can come up with high quality and truly novel images, let's put those to the real test."}, {"start": 55.92, "end": 61.36, "text": " In other words, see how they stand up against the works of real artists."}, {"start": 61.36, "end": 69.44, "text": " Our first subject will be my favorite artist in the world, and that would be my wife Felizia Zonai-Fehir."}, {"start": 69.44, "end": 71.92, "text": " She drew this chimpanzee."}, {"start": 71.92, "end": 74.8, "text": " Oh my, that is beautiful."}, {"start": 74.8, "end": 78.39999999999999, "text": " So, let's try this against Dolly2."}, {"start": 78.39999999999999, "end": 80.96, "text": " Okay, but how?"}, {"start": 80.96, "end": 86.39999999999999, "text": " Well, the first way of trying it will be through a variant generation."}, {"start": 86.39999999999999, "end": 87.84, "text": " What is that?"}, {"start": 87.84, "end": 97.84, "text": " Well, since this AI is trained to understand the connection between text and images, the understanding goes both ways."}, {"start": 97.84, "end": 104.32, "text": " This means that we can not only input text and get an image, but input an image too."}, {"start": 104.32, "end": 107.91999999999999, "text": " Does it understand what is depicted in this image?"}, {"start": 107.91999999999999, "end": 109.11999999999999, "text": " Let's see."}, {"start": 109.11999999999999, "end": 110.63999999999999, "text": " Oh yes."}, {"start": 110.63999999999999, "end": 119.52, "text": " Wow, it not only understands what is going on, but it can demonstrate that by creating really good variants of it."}, {"start": 119.52, "end": 123.83999999999999, "text": " But not nearly as good as Felizia, I have to say."}, {"start": 123.83999999999999, "end": 125.67999999999999, "text": " Let's give it another chance."}, {"start": 125.67999999999999, "end": 131.35999999999999, "text": " Here, I used a text prompt to get a hand-run chimp looking to the right."}, {"start": 131.36, "end": 134.8, "text": " And I asked for lots and lots of detail."}, {"start": 134.8, "end": 140.72000000000003, "text": " I have pleaded and pleaded for extra wrinkles to squeeze everything out of the AI,"}, {"start": 140.72000000000003, "end": 145.52, "text": " even cherry-pick the results, and these were the best I could get."}, {"start": 145.52, "end": 148.16000000000003, "text": " You know, art is highly subjective."}, {"start": 148.16000000000003, "end": 150.16000000000003, "text": " That's what makes it what it is."}, {"start": 150.16000000000003, "end": 156.48000000000002, "text": " But for now, I have to say, team humanity, team AI, 
one zero."}, {"start": 156.48000000000002, "end": 160.8, "text": " And at this point, I thought this is going to be easy."}, {"start": 160.8, "end": 162.48000000000002, "text": " Easy, easy, easy."}, {"start": 162.48000000000002, "end": 167.36, "text": " And boy, you'll find out in a moment how wrong I was."}, {"start": 167.36, "end": 173.12, "text": " Here is a work from one of my other favorite artists, Yudit Shomujwari."}, {"start": 173.12, "end": 176.72, "text": " Wow, that is one of her original characters."}, {"start": 176.72, "end": 178.32000000000002, "text": " Very imaginative."}, {"start": 178.32000000000002, "end": 179.68, "text": " I love it."}, {"start": 179.68, "end": 182.96, "text": " So, let's ask for a variant of that."}, {"start": 182.96, "end": 186.24, "text": " Does the AI understand what is going on here?"}, {"start": 186.24, "end": 191.20000000000002, "text": " You know, there's a lot to take in here, half ears, makeup, and the..."}, {"start": 191.20000000000002, "end": 192.4, "text": " Wow!"}, {"start": 192.4, "end": 194.4, "text": " I cannot believe that."}, {"start": 194.4, "end": 198.08, "text": " I am lucky that the variants are working out so well"}, {"start": 198.08, "end": 201.68, "text": " because writing a prompt for this would be quite a challenge."}, {"start": 201.68, "end": 204.4, "text": " I am truly out of words."}, {"start": 204.4, "end": 206.4, "text": " These are so good."}, {"start": 206.4, "end": 210.16000000000003, "text": " But not as good as Yudit, in my opinion."}, {"start": 210.16000000000003, "end": 213.76000000000002, "text": " Team humanity, team AI, two zero."}, {"start": 213.76, "end": 217.44, "text": " That's a good score, but it is getting harder."}, {"start": 217.44, "end": 220.72, "text": " Now, let's look at one of Yudit's other works,"}, {"start": 220.72, "end": 225.04, "text": " an ostrich that is also a Formula One racer."}, {"start": 225.04, "end": 229.76, "text": " I am loving this, especially the helmet and the goggles."}, {"start": 229.76, "end": 233.2, "text": " I think I can whip up a good prompt for this."}, {"start": 233.2, "end": 236.56, "text": " So, what did the AI come up with?"}, {"start": 236.56, "end": 238.72, "text": " Holy matter of papers."}, {"start": 238.72, "end": 240.39999999999998, "text": " Is this some kind of joke?"}, {"start": 240.39999999999998, "end": 242.48, "text": " It cannot be that good."}, {"start": 242.48, "end": 245.2, "text": " I check the images over and over again."}, {"start": 245.2, "end": 246.88, "text": " It is that good."}, {"start": 246.88, "end": 247.92, "text": " Wow!"}, {"start": 247.92, "end": 251.2, "text": " Now, of course, scoring this is subjective,"}, {"start": 251.2, "end": 254.0, "text": " but clearly, both are excellent."}, {"start": 254.0, "end": 255.92, "text": " I'll give this one a tie."}, {"start": 255.92, "end": 259.84, "text": " Team humanity versus Team AI, three one."}, {"start": 259.84, "end": 262.56, "text": " I told you, it is getting harder."}, {"start": 262.56, "end": 266.15999999999997, "text": " Make sure to check out both Yudit and Felicia's works."}, {"start": 266.15999999999997, "end": 269.36, "text": " The links are available in the video description."}, {"start": 269.36, "end": 273.44, "text": " And now, for the next test, please give a warm welcome"}, {"start": 273.44, "end": 276.08000000000004, "text": " to the King of all donuts."}, {"start": 276.08000000000004, "end": 279.28000000000003, "text": " Of course, that is the great Andrew Price,"}, 
{"start": 279.28000000000003, "end": 282.64, "text": " 3D modeler and teacher extraordinaire."}, {"start": 282.64, "end": 287.52000000000004, "text": " Now, here are some of the donuts I came up with using Dolly too."}, {"start": 287.52000000000004, "end": 291.52000000000004, "text": " And some from the prompt that I ran for my daughter."}, {"start": 291.52000000000004, "end": 293.92, "text": " Loving the Googli eyes there, by the way."}, {"start": 293.92, "end": 296.48, "text": " So these are some great donuts."}, {"start": 296.48, "end": 299.12, "text": " Now, Andrew, it's your turn."}, {"start": 299.12, "end": 300.72, "text": " Let's see."}, {"start": 300.72, "end": 301.68, "text": " Wow!"}, {"start": 301.68, "end": 304.56, "text": " Open AI's was a good one, but this."}, {"start": 304.56, "end": 307.68, "text": " This is a Michelin star donut right there."}, {"start": 307.68, "end": 311.28000000000003, "text": " The AI made ones are no match for Andrew's skills."}, {"start": 311.28000000000003, "end": 315.68, "text": " So, Team humanity versus Team AI, four one."}, {"start": 315.68, "end": 319.92, "text": " Now note that Dolly too often isn't as good as a real"}, {"start": 319.92, "end": 323.68, "text": " world-class artist, but the AI can create"}, {"start": 323.68, "end": 326.72, "text": " 10,000 of these images a day."}, {"start": 326.72, "end": 329.6, "text": " That must be worth at least a point."}, {"start": 329.6, "end": 331.84000000000003, "text": " So, four two."}, {"start": 331.84000000000003, "end": 332.8, "text": " Whew!"}, {"start": 332.8, "end": 333.92, "text": " We made it."}, {"start": 333.92, "end": 336.56, "text": " Now, all this testing is great fun,"}, {"start": 336.56, "end": 338.88000000000005, "text": " and of course, highly subjective."}, {"start": 338.88000000000005, "end": 341.52000000000004, "text": " So, don't take it too seriously."}, {"start": 341.52000000000004, "end": 344.8, "text": " And here comes the even more important lesson."}, {"start": 344.8, "end": 348.64000000000004, "text": " In my opinion, this tool should not be framed at all"}, {"start": 348.64000000000004, "end": 351.68, "text": " as Team Humanity versus Team AI."}, {"start": 351.68, "end": 355.12, "text": " I believe they should be framed as Team Humanity"}, {"start": 355.12, "end": 357.76, "text": " supercharged by Team AI."}, {"start": 357.76, "end": 361.28000000000003, "text": " Here is why through six inspiring examples."}, {"start": 361.28000000000003, "end": 365.68, "text": " Look, first, this is my second favorite prompt ever,"}, {"start": 365.68, "end": 368.8, "text": " a grumpy cat in a spaceship."}, {"start": 368.8, "end": 370.88, "text": " That is incredible."}, {"start": 370.88, "end": 374.4, "text": " Now, almost as soon as I posted it on Twitter,"}, {"start": 374.4, "end": 378.88, "text": " it inspired Fabian to create an animated version of it."}, {"start": 378.88, "end": 383.2, "text": " And now, our adorable little grump is ready for takeoff."}, {"start": 383.2, "end": 388.24, "text": " Through the ingenuity of Team Human and Team AI together."}, {"start": 388.24, "end": 389.52, "text": " So cool."}, {"start": 389.52, "end": 391.52, "text": " What a time to be alive."}, {"start": 391.52, "end": 395.76, "text": " So, I hear you asking if this was my second favorite."}, {"start": 395.76, "end": 397.52, "text": " What is the first?"}, {"start": 397.52, "end": 401.84, "text": " Well, two have a look at this Fox scientist."}, {"start": 401.84, "end": 404.96, "text": " I cannot 
believe how well this came out."}, {"start": 404.96, "end": 407.59999999999997, "text": " It has tons of personality."}, {"start": 407.59999999999997, "end": 411.91999999999996, "text": " And it also inspired others to repaint it."}, {"start": 411.92, "end": 413.12, "text": " Here it is."}, {"start": 413.12, "end": 416.08000000000004, "text": " You see, I don't see this as a competition."}, {"start": 416.08000000000004, "end": 418.56, "text": " I see this as a collaboration."}, {"start": 418.56, "end": 422.40000000000003, "text": " Three, it also inspired the great beardy man"}, {"start": 422.40000000000003, "end": 426.32, "text": " who wanted to see a Platypus studying the schematics"}, {"start": 426.32, "end": 429.04, "text": " for a cold fusion reactor."}, {"start": 429.04, "end": 431.76, "text": " These are absolutely amazing."}, {"start": 431.76, "end": 435.84000000000003, "text": " Not just the quality, but the variety of the results."}, {"start": 435.84000000000003, "end": 437.12, "text": " My goodness."}, {"start": 437.12, "end": 440.16, "text": " Note that he asked for a one-eyed Platypus,"}, {"start": 440.16, "end": 444.48, "text": " but this iteration of Dolly is not that good with numbers."}, {"start": 444.48, "end": 448.88000000000005, "text": " Yet, if it could talk, it would probably apologize for it."}, {"start": 448.88000000000005, "end": 453.36, "text": " Four, I also ran a prompt for David Brown from Boy in a Band"}, {"start": 453.36, "end": 456.40000000000003, "text": " which got real interesting, real quick."}, {"start": 456.40000000000003, "end": 459.04, "text": " He was looking for a visual pun."}, {"start": 459.04, "end": 460.56, "text": " What is that?"}, {"start": 460.56, "end": 462.08000000000004, "text": " Is this a pen?"}, {"start": 462.08000000000004, "end": 465.20000000000005, "text": " Is this the pun that we got a pen instead?"}, {"start": 465.20000000000005, "end": 466.88, "text": " I am not sure."}, {"start": 466.88, "end": 469.68, "text": " But this is very cool nonetheless."}, {"start": 469.68, "end": 473.92, "text": " And I also wonder what the AI is trying to say here."}, {"start": 473.92, "end": 478.96, "text": " Five, James from Liner's Tech Tips also ran a superb prompt,"}, {"start": 478.96, "end": 483.36, "text": " a time traveler insensitive to humanity's trivialities."}, {"start": 483.36, "end": 485.44, "text": " What a great idea."}, {"start": 485.44, "end": 490.16, "text": " Oh my, loving the framing and the thoughtfulness of these results."}, {"start": 490.16, "end": 494.0, "text": " And finally, six, it inspired me too."}, {"start": 494.0, "end": 496.24, "text": " I also made this."}, {"start": 496.24, "end": 499.12, "text": " All hail the mighty cyber frog."}, {"start": 499.12, "end": 500.4, "text": " Loving it."}, {"start": 500.4, "end": 503.84000000000003, "text": " And we also have a tiny chihuahua in our home"}, {"start": 503.84000000000003, "end": 506.12, "text": " and in the two-minute paper's labs."}, {"start": 506.12, "end": 509.48, "text": " She is Rosie, the paper dog, if you will."}, {"start": 509.48, "end": 511.52, "text": " This is how we see her."}, {"start": 511.52, "end": 515.48, "text": " And now we can also show how she sees herself"}, {"start": 515.48, "end": 517.8, "text": " when meeting other dogs."}, {"start": 517.8, "end": 521.92, "text": " Oh yes, I think that is very accurate."}, {"start": 521.92, "end": 525.64, "text": " And of course, as always, plus one."}, {"start": 525.64, "end": 530.52, "text": " You didn't 
think we would leave without looking at some of you fellow scholars"}, {"start": 530.52, "end": 532.92, "text": " holding onto your papers, did you?"}, {"start": 532.92, "end": 534.76, "text": " Well, here you go."}, {"start": 534.76, "end": 538.68, "text": " And as you see, these are some proper papers."}, {"start": 538.68, "end": 541.24, "text": " You fellow scholars are loving it."}, {"start": 541.24, "end": 546.52, "text": " But now let's have an even more explosive paper like Dolly too."}, {"start": 546.52, "end": 549.0, "text": " Oh yes, now we're talking."}, {"start": 549.0, "end": 554.48, "text": " This scholar is you who knows that this paper is history in the making."}, {"start": 554.48, "end": 559.9200000000001, "text": " So much so that I cannot stop thinking about and playing with this thing."}, {"start": 559.9200000000001, "end": 562.08, "text": " It really keeps me up at night."}, {"start": 562.08, "end": 567.32, "text": " This is an incredible achievement that is going to transform the world around us."}, {"start": 567.32, "end": 568.72, "text": " And fast."}, {"start": 568.72, "end": 572.6, "text": " So artists, AI people, what do you think?"}, {"start": 572.6, "end": 574.88, "text": " Which results did you like better?"}, {"start": 574.88, "end": 577.04, "text": " Let me know in the comments below."}, {"start": 577.04, "end": 582.72, "text": " And also I will be on Twitter taking some requests from my followers, so make sure to follow"}, {"start": 582.72, "end": 585.36, "text": " me there and send some prompts."}, {"start": 585.36, "end": 588.76, "text": " The best ones may appear in a future episode here."}, {"start": 588.76, "end": 590.6, "text": " The link is in the description."}, {"start": 590.6, "end": 597.84, "text": " If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in"}, {"start": 597.84, "end": 601.0, "text": " the world for GPU cloud compute."}, {"start": 601.0, "end": 603.9200000000001, "text": " No commitments or negotiation required."}, {"start": 603.9200000000001, "end": 607.08, "text": " Just sign up and launch an instance."}, {"start": 607.08, "end": 612.6800000000001, "text": " And hold onto your papers because with Lambda GPU cloud, you can get on demand"}, {"start": 612.68, "end": 621.28, "text": " A 100 instances for $1.10 per hour versus $4.10 per hour with AWS."}, {"start": 621.28, "end": 624.0799999999999, "text": " That's 73% savings."}, {"start": 624.0799999999999, "end": 627.56, "text": " Did I mention they also offer persistent storage?"}, {"start": 627.56, "end": 635.76, "text": " So join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 635.76, "end": 638.12, "text": " workstations or servers."}, {"start": 638.12, "end": 645.16, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 645.16, "end": 646.16, "text": " today."}, {"start": 646.16, "end": 676.12, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HyOW6fmkgrc
Google’s Imagen AI: Outrageously Good! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Imagen: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding" is available here: https://gweb-research-imagen.appspot.com/ 🕊️ Follow us on Twitter for more DALL-E 2 and Imagen-related content: https://twitter.com/twominutepapers ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 0:00 Google Imagen 0:21 What is DALL-E 2? 1:04 Google Imagen enters the scene 1:26 Pandas and guitars 2:07 What is new here? 2:29 Finally, text! 2:52 Oh my, refraction too! 3:21 AI reacts to an other AI 3:45 Imagen VS DALL-E 2 5:08 More tests 5:35 So much progress in so little time 6:19 More results Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #google #imagen
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I cannot tell you how excited I am by this paper. Wow! Today, you will see more incredible images generated by an AI. However, not from OpenAI, but from Google. Just a few months ago, OpenAI's image generator AI called DALL-E 2 took the world by storm. You could name almost anything: a cat Napoleon, a teddy bear on a skateboard in Times Square, a basketball player dunking as an explosion of a nebula, and it was able to create an appropriate image for it. However, there was one interesting thing about it. What do you think the prompt for this image must have been? Hmm, not easy, right? Well, it was a sign that says Deep Learning. Oh yes, this was one of the failure cases. Please remember this. Now, we always say that in research, do not look at where we are, always look at where we will be two more papers down the line. However, we didn't even make it two more papers down the line. We barely made it two months down the line, and here it is. This is Google Research's incredible image generator AI, Imagen. This technique also looks at millions of images and text description pairs, and learns what people mean when they say this is a guitar, or this is a panda. But the magic happens here. Oh yes, it also learned how to combine these concepts together, and how a panda would play a guitar. The frets and the strings seem a little wacky, but what do we know? This is not human engineering. This is panda engineering. Or rather, robot engineering, for a panda. We are living in crazy times indeed. So, what is different here? Why do we need another image generator AI? Well, let's pop the hood and look inside. Oh yes, two things that will immediately make a difference come to mind. One, this architecture is simpler. Two, it learns on longer text descriptions, and hopefully this also means that it can generate text better. Let's have a look. What are you seeing? What am I seeing? That is not just some text. That is a beautiful piece of text. Exactly what we were looking for. Absolutely amazing. And these are not the only advantages. You know, I am a light transport researcher by trade. I promised myself that I'll try not to flip out. But hold on to your papers and... Holy mother of papers! Look at that! It can also generate beautiful refractive objects. That duck is truly a sight to behold. My goodness! Now, I will note that DALL-E 2 was also pretty good at this. And if we plug in DeepMind's new Flamingo language model, would you look at that? Is this really happening? Yes, that is an AI commenting on a different AI's work. What a time to be alive! We will have a look at this paper too in the near future. Make sure to subscribe and hit the bell icon. You really don't want to miss it when it comes. And you know what? Let's test it some more against OpenAI's amazing DALL-E 2 and see how they stack up against each other. The first prompt will be a couple of glasses sitting on a table. First, with Google's Imagen. Oh my! These ones are amazing. Once again, proper refractive objects. Loving it. And what about DALL-E 2? There is one with the glasses with an interesting framing, but I see both reflections and refractions. Apart from the framing, I am liking this. And the rest? Well, yes, those are glasses sitting on the table, but when we say a couple of glasses, we probably mean these and not these. But that's really interesting: two AIs having a linguistic battle here. Imagine showing this to someone just 10 years ago and see how they would react. Loving it. Also, I bet that in the future, these AIs will work like brands and products today, where people will have strong opinions as to which ones they prefer. The analog warmth of Imagen, or the three-year warranty on DALL-E 4. And wait, you are all experienced Fellow Scholars here, so you also wish to see the two tested against each other a little more rigorously. And we'll do exactly that. The paper checks the new technique against previous results mathematically, or we can ask a bunch of humans which one they prefer. And, wow, the new technique passes with flying colors on both. And once again, DALL-E 2 appeared just about two months ago, and now here is a new follow-up paper from Google that tests really well against it. This is not two more papers down the line. Not even two more years down the line. This is just two more months down the line. And, a year before that, we had DALL-E 1, and see how much of a difference OpenAI made in just a year. Now, I am almost certain that this paper has been in the works for a while and that they added the comparisons against DALL-E 2 at the end. But still, a follow-up paper this quickly; the pace of progress in AI research is absolutely incredible. Bravo, Google! What a time to be alive! So, does this get your mind going? What else would you use this new technique for? Let me know in the comments below. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this: they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And, hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
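The head-to-head evaluation described in the transcription above comes down to counting pairwise human preferences between the two models' images. Below is a minimal sketch of how such a preference rate and a rough 95% confidence interval could be tallied; the vote counts and the `preference_rate` helper are illustrative assumptions, not numbers or code from the Imagen paper.

```python
import math

def preference_rate(votes):
    # `votes` holds one boolean per side-by-side comparison:
    # True = the rater preferred the new model's image over the baseline's.
    wins = sum(votes)
    n = len(votes)
    p = wins / n
    # Normal-approximation 95% confidence interval for the win rate.
    half_width = 1.96 * math.sqrt(p * (1.0 - p) / n)
    return p, (p - half_width, p + half_width)

# Hypothetical tally: 1000 comparisons, 780 won by the new model.
votes = [True] * 780 + [False] * 220
rate, (lo, hi) = preference_rate(votes)
print(f"win rate: {rate:.1%}, 95% CI: [{lo:.1%}, {hi:.1%}]")
```

With enough raters, a win rate whose entire confidence interval sits above 50% is what "passes with flying colors" means in such a study.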
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kanoz-Jolnai-Fehid."}, {"start": 4.64, "end": 9.120000000000001, "text": " I cannot tell you how excited I am by this paper."}, {"start": 9.120000000000001, "end": 10.120000000000001, "text": " Wow!"}, {"start": 10.120000000000001, "end": 15.92, "text": " Today, you will see more incredible images generated by an AI."}, {"start": 15.92, "end": 20.400000000000002, "text": " However, not from OpenAI, but from Google."}, {"start": 20.400000000000002, "end": 27.84, "text": " Just a few months ago, OpenAI's Image Generator AI called Dolly II took the world by storm."}, {"start": 27.84, "end": 30.56, "text": " You could name almost anything."}, {"start": 30.56, "end": 36.4, "text": " Kat Napoleon, a teddy bear on a skateboard on Times Square, a basketball player,"}, {"start": 36.4, "end": 42.8, "text": " dunking as an explosion of a nebula, and it was able to create an appropriate image for it."}, {"start": 42.8, "end": 46.64, "text": " However, there was one interesting thing about it."}, {"start": 46.64, "end": 50.4, "text": " What do you think the prompt for this image must have been?"}, {"start": 50.4, "end": 53.519999999999996, "text": " Hmm, not easy, right?"}, {"start": 53.52, "end": 58.24, "text": " Well, it was a sign that says Deep Learning."}, {"start": 58.24, "end": 61.68000000000001, "text": " Oh yes, this was one of the failure cases."}, {"start": 61.68000000000001, "end": 63.52, "text": " Please remember this."}, {"start": 63.52, "end": 68.0, "text": " Now, we always say that in research, do not look at where we are,"}, {"start": 68.0, "end": 72.08, "text": " always look at where we will be two more papers down the line."}, {"start": 72.08, "end": 76.48, "text": " However, we didn't even make it two more papers down the line."}, {"start": 76.48, "end": 80.80000000000001, "text": " We barely made two months down the line, and here it is."}, {"start": 80.8, "end": 86.72, "text": " This is Google Research's incredible Image Generator AI image."}, {"start": 86.72, "end": 91.84, "text": " This technique also looks at millions of images and text description pairs,"}, {"start": 91.84, "end": 99.28, "text": " and learns what people mean when they say this is a guitar, or this is a panda."}, {"start": 99.28, "end": 102.32, "text": " But the magic happens here."}, {"start": 102.32, "end": 107.67999999999999, "text": " Oh yes, it also learned how to combine these concepts together,"}, {"start": 107.68, "end": 110.80000000000001, "text": " and how a panda would play a guitar."}, {"start": 110.80000000000001, "end": 114.16000000000001, "text": " The frets and the strings seem a little wacky,"}, {"start": 114.16000000000001, "end": 115.60000000000001, "text": " but what do we know?"}, {"start": 115.60000000000001, "end": 117.84, "text": " This is not human engineering."}, {"start": 117.84, "end": 120.16000000000001, "text": " This is panda engineering."}, {"start": 120.16000000000001, "end": 124.88000000000001, "text": " Or robot engineering, something for a panda."}, {"start": 124.88000000000001, "end": 127.76, "text": " We are living crazy times indeed."}, {"start": 127.76, "end": 129.92000000000002, "text": " So, what is different here?"}, {"start": 129.92000000000002, "end": 133.12, "text": " Why do we need another Image Generator AI?"}, {"start": 133.12, "end": 137.28, "text": " Well, let's pop the hood and look inside."}, {"start": 137.28, "end": 142.8, "text": " Oh yes, two things that will 
immediately make a difference come to mind."}, {"start": 142.8, "end": 145.84, "text": " One, this architecture is simpler."}, {"start": 145.84, "end": 149.52, "text": " Two, it learns on longer text descriptions,"}, {"start": 149.52, "end": 155.04, "text": " and hopefully this also means that it can generate text better."}, {"start": 155.04, "end": 156.72, "text": " Let's have a look."}, {"start": 156.72, "end": 158.72, "text": " What are you seeing?"}, {"start": 158.72, "end": 160.32, "text": " What I am seeing?"}, {"start": 160.32, "end": 162.88, "text": " That is not just some text."}, {"start": 162.88, "end": 165.44, "text": " That is a beautiful piece of text."}, {"start": 165.44, "end": 167.6, "text": " Exactly what we were looking for."}, {"start": 167.6, "end": 169.52, "text": " Absolutely amazing."}, {"start": 169.52, "end": 172.07999999999998, "text": " And these are not the only advantages."}, {"start": 172.07999999999998, "end": 175.35999999999999, "text": " You know, I am a light transport researcher by trade."}, {"start": 175.35999999999999, "end": 179.92, "text": " I promise to myself that I'll try not to flip out."}, {"start": 179.92, "end": 183.6, "text": " But hold on to your papers and..."}, {"start": 183.6, "end": 186.16, "text": " Holy mother of papers!"}, {"start": 186.16, "end": 187.6, "text": " Look at that!"}, {"start": 187.6, "end": 191.84, "text": " It can also generate beautiful refractive objects."}, {"start": 191.84, "end": 195.12, "text": " That duck is truly a sight to behold."}, {"start": 195.12, "end": 196.4, "text": " My goodness!"}, {"start": 196.4, "end": 200.48000000000002, "text": " Now I will note that Dolly too was also pretty good at this."}, {"start": 201.52, "end": 206.72, "text": " And if we plug in DeepMind's new Flamingo Language model,"}, {"start": 206.72, "end": 208.24, "text": " would you look at that?"}, {"start": 208.24, "end": 209.52, "text": " Is this really happening?"}, {"start": 210.24, "end": 214.8, "text": " Yes, that is an AI commenting on a different AI's work."}, {"start": 215.6, "end": 217.36, "text": " What a time to be alive!"}, {"start": 217.36, "end": 220.72, "text": " We will have a look at this paper too in the near future."}, {"start": 220.72, "end": 223.84, "text": " Make sure to subscribe and hit the bell icon."}, {"start": 223.84, "end": 226.32, "text": " You really don't want to miss it when it comes."}, {"start": 226.32, "end": 227.92000000000002, "text": " And you know what?"}, {"start": 227.92000000000002, "end": 232.88, "text": " Let's test it some more August Open AI's amazing Dolly too."}, {"start": 232.88, "end": 235.12, "text": " See how they stack up against each other."}, {"start": 235.76, "end": 240.64000000000001, "text": " The first prompt will be a couple of glasses sitting on the table."}, {"start": 241.28, "end": 242.96, "text": " Well, with Google's image on."}, {"start": 243.6, "end": 244.64000000000001, "text": " Oh my!"}, {"start": 244.64000000000001, "end": 246.8, "text": " These ones are amazing."}, {"start": 246.8, "end": 250.16, "text": " Once again, proper refractive objects."}, {"start": 250.16, "end": 250.64000000000001, "text": " Loving it."}, {"start": 250.64, "end": 253.11999999999998, "text": " And what about Dolly too?"}, {"start": 253.67999999999998, "end": 256.96, "text": " There is one with the glasses with an interesting framing."}, {"start": 257.59999999999997, "end": 261.76, "text": " But I see both reflections and refractions."}, {"start": 262.32, "end": 264.96, "text": " Apart 
from the framing, I am liking this."}, {"start": 265.52, "end": 266.47999999999996, "text": " And the rest."}, {"start": 267.12, "end": 271.28, "text": " Well, yes, those are glasses sitting on the table."}, {"start": 271.84, "end": 277.36, "text": " But when we say a couple of glasses, we probably mean these and not these."}, {"start": 277.36, "end": 280.24, "text": " But that's really interesting."}, {"start": 280.24, "end": 283.92, "text": " Two AI's that have a linguistic battle here."}, {"start": 283.92, "end": 287.36, "text": " Imagine showing this to someone just 10 years ago."}, {"start": 288.0, "end": 289.36, "text": " See how they would react."}, {"start": 289.92, "end": 290.96000000000004, "text": " Loving it."}, {"start": 290.96000000000004, "end": 297.36, "text": " Also, I bet that in the future, these AI's will work like brands and products today,"}, {"start": 297.36, "end": 301.28000000000003, "text": " where people will have strong opinions as to which ones they prefer."}, {"start": 301.28, "end": 307.35999999999996, "text": " The analog warmth of image in, or the three year warranty on Dolly 4."}, {"start": 307.35999999999996, "end": 310.96, "text": " And wait, you are all experienced fellow scholars here."}, {"start": 310.96, "end": 317.11999999999995, "text": " So, you also wish to see the two tested against each other a little more rigorously."}, {"start": 317.11999999999995, "end": 319.35999999999996, "text": " And we'll do exactly that."}, {"start": 319.35999999999996, "end": 324.55999999999995, "text": " The paper checks the new technique against previous results mathematically."}, {"start": 325.28, "end": 329.44, "text": " Or we can ask a bunch of humans which one they prefer."}, {"start": 329.44, "end": 335.36, "text": " And, wow, the new technique passes with flying colors on both."}, {"start": 335.36, "end": 340.71999999999997, "text": " And once again, Dolly 2 appeared just about two months ago."}, {"start": 340.71999999999997, "end": 346.48, "text": " And now, a new follow-up paper from Google that tests really well against it."}, {"start": 346.48, "end": 349.28, "text": " This is not two more papers down the line."}, {"start": 349.28, "end": 352.32, "text": " Not even two more years down the line."}, {"start": 352.32, "end": 356.08, "text": " This is just two more months down the line."}, {"start": 356.08, "end": 364.8, "text": " And, a year before, we had Dolly 1 and see how much of a difference open AI made in just a year."}, {"start": 364.8, "end": 369.59999999999997, "text": " Now, I am almost certain that this paper has been in the works for a while"}, {"start": 369.59999999999997, "end": 373.68, "text": " and the added comparisons against Dolly 2 at the end."}, {"start": 373.68, "end": 381.84, "text": " But still, a follow-up paper, this quickly, the piece of progress in AI research is absolutely incredible."}, {"start": 381.84, "end": 383.28, "text": " Bravo, Google!"}, {"start": 383.28, "end": 385.28, "text": " What a time to be alive!"}, {"start": 385.28, "end": 387.76, "text": " So, does this get your mind going?"}, {"start": 387.76, "end": 390.88, "text": " What else would you use this new technique for?"}, {"start": 390.88, "end": 392.71999999999997, "text": " Let me know in the comments below."}, {"start": 392.71999999999997, "end": 396.79999999999995, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 396.79999999999995, "end": 403.76, "text": " If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU 
Cloud."}, {"start": 403.76, "end": 412.4, "text": " Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory."}, {"start": 412.4, "end": 421.28, "text": " And, hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 421.28, "end": 428.96, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 428.96, "end": 431.67999999999995, "text": " workstations, or servers."}, {"start": 431.67999999999995, "end": 440.15999999999997, "text": " Make sure to go to LambdaLabs.com, slash papers to sign up for one of their amazing GPU instances today."}, {"start": 440.16, "end": 446.72, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 446.72, "end": 476.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=D0vpgZKNEy0
Google’s New Robot: Your Personal Butler! 🤖
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances" is available here: https://say-can.github.io/ https://arxiv.org/abs/2204.01691 🕊️ Check us out on Twitter for more DALL-E 2 related content! https://twitter.com/twominutepapers ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail image: OpenAI DALL-E 2 Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #google #imagen
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, you are going to see how one area of research can unlock new capabilities in a seemingly completely different area. We are now living in the advent of amazing AI-based language models. For instance, OpenAI's GPT-3 technique is capable of all kinds of wizardry, for instance, finishing our sentences or creating plots, spreadsheets, mathematical formulae, and many other things. Meanwhile, their DALL-E 2 AI is capable of generating incredible-quality images from a written description, even if they are, well, too specific. Way too specific. Now, note that all this wizardry is possible as long as we are dealing with text and images. How about endowing a real robot, moving around in the real world, with this kind of understanding of language? I wonder what that could do. Well, check this out. Scenario one: this little robot uses GPT-3 and other language models, and it not only understands what we say, but it can also use this knowledge to help us. Don't believe it? Have a look. For instance, we can tell it that we spilled the coke and ask it how it could help us out, and it recommends finding the coke can, picking it up, going to the trash can, throwing it out, and bringing us a sponge. Yes, we will have to do the rest of it ourselves, but still. Wow! Good job, little robot. Now, the key here is that it not only understands what we are asking and proposes how to help, but, hold on to your papers, because here comes scenario two. Oh my, it also looks around, locates the most important objects required, and now it knows enough to make recommendations as to what to do. And, of course, not all of these make sense. Look at that. It can say that it is very sorry about the mess. Well, thank you for the emotional support, little robot, but we also need some physical help here. Oh yes, that's better. And now, look, it uses its eyes to look around. Yes, it has a hand too. Well, does it work? Yes, it does. Great. Now, coke can, trash can, sponge. Hmm, it's time to make a cunning plan. Perfect. And it can also do plenty more. For instance, if we've been reading research papers all day and we feel a little tired, if we tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple. Now, I'd love to see this. Be gentle. And, oh my, thank you so much. The amenities at Google HQ seem to have leveled up. And believe it or not, these were just really simple things. It can do way better. Look, this is a plan that requires planning 16 steps ahead, and it does not get stuck and doesn't mess up too badly anywhere. This one is as close to a personal butler as it gets. Absolutely incredible. These are finally real robots that can help us with real tasks in the real world. So cool. What a time to be alive. Now, this is truly an amazing paper, but make no mistake, not even this is perfect. For instance, the success rate for the planning is about 70%, and it can properly execute the plans most of the time, but clearly not all the time. The longer-term planning results may also need to be a bit cherry-picked to get a good one. It doesn't always succeed. Also, note that all this is played at 10x speed, so it takes a while. Clearly, typing the message and waiting for the robot still takes longer than just doing the work. However, this is an excellent opportunity for us to apply the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. And if OpenAI's image generator AI, DALL-E 1, looked like this, and just a year and a paper later, it looks like this, well, just imagine what this will be able to do just a couple more papers down the line. And what do you think? Does this get your mind going? If you have ideas for cool applications, let me know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai/papers, or click the link in the video description, and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
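The behavior described above, where a language model proposes the next useful step and the robot only attempts steps it believes it can carry out, is the core idea of the SayCan paper linked in the description. Here is a hedged sketch of that scoring loop; `llm_likelihood` and `affordance` are toy stand-ins for a real language-model score and the robot's learned feasibility estimate, and the skill list is made up for illustration.

```python
# Sketch of SayCan-style planning: at every step, each available skill is
# scored by (how plausible the language model finds it as the next step)
# times (how feasible the robot currently estimates it to be), and the
# best-scoring skill is appended to the plan. Toy stand-ins throughout.

SKILLS = ["find the coke can", "pick up the coke can", "go to the trash can",
          "put down the coke can", "find a sponge", "bring it to you", "done"]

# A made-up canonical ordering, standing in for what a real LLM would prefer.
CANONICAL = SKILLS

def llm_likelihood(instruction, steps_so_far, skill):
    # Toy "say" score: a real system would score the concatenated text
    # (instruction + plan so far + candidate skill) with a language model.
    expected = CANONICAL[min(len(steps_so_far), len(CANONICAL) - 1)]
    return 1.0 if skill == expected else 0.1

def affordance(skill):
    # Toy "can" score: a real system would use a learned value function
    # that looks at the camera image and estimates the chance of success.
    return 1.0

def plan(instruction, max_steps=16):
    steps = []
    for _ in range(max_steps):
        best = max(SKILLS,
                   key=lambda s: llm_likelihood(instruction, steps, s) * affordance(s))
        if best == "done":
            break
        steps.append(best)
    return steps

print(plan("I spilled my coke, how can you help?"))
```

The product of the two scores is what grounds the plan: a step the language model loves but the robot cannot currently perform scores low, which is why the plans tend to be executable even over the 16-step horizon mentioned above.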
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jolai Fahir."}, {"start": 4.88, "end": 12.96, "text": " Today, you are going to see how one area of research can unlock new capabilities on a seemingly"}, {"start": 12.96, "end": 20.56, "text": " completely different area. We are now living the advent of amazing AI-based language models."}, {"start": 20.56, "end": 26.64, "text": " For instance, OpenAI's GPT-3 technique is capable of all kinds of wizardry,"}, {"start": 26.64, "end": 33.36, "text": " for instance, finishing our sentences or creating plots, spreadsheets, mathematical formulae,"}, {"start": 33.36, "end": 41.84, "text": " and many other things. While their Dolly2AI is capable of generating incredible quality images"}, {"start": 41.84, "end": 48.96, "text": " from a written description, even if they are too specific. Way too specific. Now,"}, {"start": 48.96, "end": 55.68, "text": " note that all this wizardry is possible as long as we are dealing with text and images."}, {"start": 55.68, "end": 63.68, "text": " How about in knowing a real robot moving around in the real world with this kind of understanding of language?"}, {"start": 63.68, "end": 72.64, "text": " I wonder what that could do. Well, check this out, Sanario 1. This little robot uses GPT-3"}, {"start": 72.64, "end": 80.32, "text": " and other language models, and it not only understands it, but it can also use this knowledge to help us."}, {"start": 80.32, "end": 87.19999999999999, "text": " Don't believe it. Have a look. For instance, we can tell it that we spilled the coke and ask it"}, {"start": 87.19999999999999, "end": 95.03999999999999, "text": " how it could help us out, and it recommends finding the coke can, picking it up, going to the trash can,"}, {"start": 95.03999999999999, "end": 103.11999999999999, "text": " throwing it out, and bringing us a sponge. Yes, we will have to do the rest of it ourselves, but still."}, {"start": 103.75999999999999, "end": 110.08, "text": " Wow! Good job, little robot. Now, the key here is that it not only understands what we are asking,"}, {"start": 110.08, "end": 116.48, "text": " propose how to help, but hold on to your papers, because here comes Sanario 2."}, {"start": 117.28, "end": 125.03999999999999, "text": " Oh my, it also looks around, locates the most important objects required, and now it knows"}, {"start": 125.03999999999999, "end": 132.64, "text": " enough to make recommendations as to what to do. And, of course, not all of these make sense."}, {"start": 132.64, "end": 136.8, "text": " Look at that. It can say that it is very sorry about the mess."}, {"start": 136.8, "end": 143.36, "text": " Well, thank you for the emotional support, little robot, but we also need some physical help here."}, {"start": 143.36, "end": 152.4, "text": " Oh yes, that's better. And now, look, it uses its eyes to look around. Yes, I have my hand."}, {"start": 152.4, "end": 162.64000000000001, "text": " Well, does it work? Yes, it does. Great. Now, coke can, trash can, sponge."}, {"start": 162.64, "end": 171.44, "text": " Hmm, it's time to make a cunning plan. Perfect. 
And it can also do plenty more."}, {"start": 171.44, "end": 177.83999999999997, "text": " For instance, if we've been reading research papers all day and we feel a little tired,"}, {"start": 177.83999999999997, "end": 184.88, "text": " if we tell it, it can bring us a water bottle, hand it to us, and can even bring us an apple."}, {"start": 184.88, "end": 194.64, "text": " Now, I'd love to see this. Be gentle. And, oh my, thank you so much. The amenities at Google HQ"}, {"start": 194.64, "end": 201.92, "text": " seem to have leveled up to the next level. And believe it or not, these were just really simple things."}, {"start": 201.92, "end": 211.04, "text": " It can do way better. Look, this is a plan that requires planning 16 steps ahead, and it does not"}, {"start": 211.04, "end": 218.95999999999998, "text": " get stuck and doesn't mess up too badly anywhere. This one is as close to a personal butler as it gets."}, {"start": 219.6, "end": 227.28, "text": " Absolutely incredible. These are finally real robots that can help us with real tasks in the real world."}, {"start": 227.84, "end": 235.84, "text": " So cool. What a time to be alive. Now, this is truly an amazing paper, but make no mistake,"}, {"start": 235.84, "end": 242.88, "text": " not even this is perfect. For instance, the success rate for the planning is about 70%"}, {"start": 242.88, "end": 249.28, "text": " and it can properly execute the plans most of the time, but clearly not all the time."}, {"start": 249.28, "end": 254.8, "text": " The longer term planning results may also need to be a bit cherry-picked to get a good one."}, {"start": 254.8, "end": 262.64, "text": " It doesn't always succeed. Also, note that all this is played at 10x speed, so it takes a while."}, {"start": 262.64, "end": 269.03999999999996, "text": " Clearly, typing the message and waiting for the robot still takes longer than just doing the work."}, {"start": 269.03999999999996, "end": 276.88, "text": " However, this is an excellent opportunity for us to apply the first law of papers, which says"}, {"start": 276.88, "end": 283.28, "text": " that research is a process. Do not look at where we are, look at where we will be, two more papers"}, {"start": 283.28, "end": 291.36, "text": " down the line. And if open AI's image generator AI, Dolly looked like this, and just a year,"}, {"start": 291.36, "end": 298.32, "text": " and a paper later, it looks like this. Well, just imagine what this will be able to do,"}, {"start": 298.32, "end": 304.48, "text": " just a couple more papers down the line. And what do you think? Does this get your mind going?"}, {"start": 304.48, "end": 308.72, "text": " If you have ideas for cool applications, let me know in the comments below."}, {"start": 308.72, "end": 315.6, "text": " This episode has been supported by CoHear AI. CoHear builds large language models and makes them"}, {"start": 315.6, "end": 321.52000000000004, "text": " available through an API so businesses can add advanced language understanding to their system"}, {"start": 321.52000000000004, "end": 328.88, "text": " or app quickly with just one line of code. You can use your own data, whether it's text from"}, {"start": 328.88, "end": 335.36, "text": " customer service requests, legal contracts, or social media posts to create your own custom"}, {"start": 335.36, "end": 342.72, "text": " models to understand text, or even generate it. 
For instance, it can be used to automatically"}, {"start": 342.72, "end": 349.28000000000003, "text": " determine whether your messages are about your business hours, returns, or shipping,"}, {"start": 349.28000000000003, "end": 355.52000000000004, "text": " or it can be used to generate a list of possible sentences you can use for your product descriptions."}, {"start": 356.08000000000004, "end": 362.24, "text": " Make sure to go to CoHear.ai slash papers, or click the link in the video description,"}, {"start": 362.24, "end": 367.6, "text": " and give it a try today. It's super easy to use. Thanks for watching and for your generous"}, {"start": 367.6, "end": 376.0, "text": " support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lbUluHiqwoA
OpenAI’s DALL-E 2: Even More Beautiful Results! 🤯
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ 📝 Our Separable Subsurface Scattering paper with Activision-Blizzard: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 📝Our earlier papers with the caustics: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Try it out: https://www.craiyon.com (once again, note that this is an unofficial and reduced version. it also runs through gradio, which is pretty cool, check it out!) ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu 00:00 What is DALL-E 2? 01:40 DALL-E 1 vs DALL-E 2 02:15 1 - It can make videos 03:04 2 - Leonardo da Apple 03:19 3 - Robot learns a new language 03:28 4 - New AI-generated drinks! 04:12 5 - Toilet car 04:27 6 - Lightbulbs! 04:56 7 - Murals 05:11 8 - Darth Ant 05:23 9 - Text! 06:14 10 - Pro photography 06:28 Subsurface scattering! 07:10 Try it out yourself! 07:45 Changing the world Tweet sources: Fantasy novel: https://twitter.com/wenquai/status/1527312285152452608 Leonardo: https://twitter.com/nin_artificial/status/1524330744600055808/photo/1 Encyclopedia: https://twitter.com/giacaglia/status/1513271094215467008?s=21 Modernize: https://twitter.com/model_mechanic/status/1513588042145021953 Robot learning: https://twitter.com/AravSrinivas/status/1514217698447663109 Drinks: https://twitter.com/djbaskin/status/1514735924826963981 Toilet car: https://twitter.com/PaulYacoubian/status/1514955904659173387/photo/2 Lightbulb: https://twitter.com/mattgroh/status/1513837678172778498 Murals: https://twitter.com/_dschnurr/status/1516449112673071106/photo/1 Darth Ant: https://twitter.com/hardmaru/status/1519224830989684738 Sign: https://twitter.com/npew/status/1520050727770488833/photo/1 Hand: https://twitter.com/graycrawford/status/1521755209667555328 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #dalle
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to play some more with OpenAI's amazing technique, DALL-E 2, the tool where we can write a text description and the AI creates an amazing image of exactly that. That sounds cool, and it gets even cooler the crazier the ideas we give to it. This is an AI endowed with a diffusion-based model, which means that when we ask it something, it starts out from noise, and over time it iteratively refines this image to match our description better. And over time, an absolutely incredible image emerges. This is a neural network that is given a ton of images and a piece of text description that says what is in this image. That is one image-caption pair. DALL-E 2 is given millions and millions of these image pairs. So what can it do with all this knowledge? Well, the best part is that it can combine things. Oh yes, this is the key. It does not copy images from its training data, but it truly comes up with novel images. How? Well, after it has seen a bunch of images of koalas, and separately a bunch of images of motorcycles, it starts to understand the concept of both, and it will be able to combine the two together into a completely new image. And here you can see how much DALL-E 2 has improved since DALL-E 1. It is on a completely different level from its first iteration. This is so much better. And once again, this is an AI that was improved this much in just a year; I can hardly believe what I am seeing here. What a time to be alive. And now that some time has passed, new interesting capabilities have emerged. Now, hold on to your papers and let's have a look at 10 more amazing examples. One, for instance, it can create not just images, but small videos. Videos? How? Well, look at this video where a Victorian house is getting modernized. Wow! So, what is going on here? Well, we can enter a text prompt, get an image, then change the text just a tiny bit, get a slightly changed image, and repeat the process over and over until we get an amazing video like this. We can also run this process backwards and Victorianize a modern building as well. Two, if you don't believe that it can combine several existing concepts into something that definitely does not exist, hold on to your papers and have a look at this one. Oh yes, Apple products, Leonardo da Vinci style. This is truly insane. If all this does not feel like human-like intelligence, I don't know what does. Three, here is how the AI imagines a robot learning a new language. Four, it can create new kinds of drinks, and my goodness. Well, I am super happy now. You are probably asking, Károly, why are you super happy? Well, I am a light transport researcher by trade, and I spend a great deal of my time computing caustics. Oh yes, caustics are these beautiful patterns that emerge when we hit a reflective or refractive object with light just the right way, and these can look especially magical if we have an object with a complex geometry. And the fact that the AI also understands this about our world, I am truly speechless. Beautiful. Five, it can also create new inventions, and this is a toilet car. You know, some people are quite busy, and if you have to go when you are on the go, well, this one is for you. What a time to be alive. Six, in this one, the AI puts on a clinic in understanding combinations of concepts. Check this out. This is plants surrounding a light bulb. Now a light bulb surrounding some plants, now a plant with a light bulb inside, and a light bulb with plants inside. It can follow these instructions really well in all of these cases. Seven, it can also create larger images too, so much so that entire murals can be requested. And here, not just the quality, but the variety of the results is truly a sight to behold. Eight, this is the famous Sith Lord, Darth Ant. Oh yes, this is Darth Vader, reimagined as a robot ant. I love it. Nine, if you remember, previously, when we requested that it write a piece of text on a sign, it floundered a great deal. This sign is supposed to say Deep Learning. And look, the amazing Peter Welinder found a way to make it write things on signs properly. And all this with an amazing depth-of-field effect. Once again, this is an AI where a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. This is prompt engineering, if you will. Perhaps a new kind of job that is just coming into existence. And ten, now check this out. We can even give it instructions as if it were a photographer, specifying the camera, and request mammatus clouds; we marveled together at the simulation of those in an earlier video. And, oh my goodness, that cannot be true. Look, the hand has subsurface scattering. What is that? That is the effect of light penetrating the skin, bouncing around, and either coming out on the same or the other side. It has this absolutely beautiful halo effect. We worked on this a bit in an earlier paper, together with Activision Blizzard, and it took a great deal of mathematics and physics to perform this efficiently in a computer graphics engine. And now the AI just knows what it is. I really don't know what to say. And, as always, plus one, because I couldn't resist. If you are interested in trying a reduced version of DALL-E, check this out. The link is available in the video description. Once again, please note that this is a highly reduced version of DALL-E, but it is still quite fun. Let the experiments begin. And I also cannot wait to get access to the full model. Some Two Minute Papers mascot figures, and obviously images of wise scholars holding onto their papers, must come into existence. I am sure this tool will democratize art creation by putting it into the hands of all of us. We all have so many ideas and so little time, and DALL-E will help with exactly that. So cool! Just imagine having an AI artist that is, let's say, just half as good as a human artist, but the AI can paint 10,000 images a day for free. Cover art for your album, illustrations for your novel? Not a problem. A little brain in a jar for everyone, if you will. And this is a really good one. Bravo, OpenAI! I am starting to believe that the impact of this AI is going to be so strong, there will be the world as we know it before DALL-E, and the world after it. So, does this get your mind going? What else would you use this for? Let me know in the comments below. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
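The "starts out from noise and iteratively refines" description near the top of this transcription is the sampling loop of a diffusion model. Below is a minimal DDPM-style sketch of that loop; the linear beta schedule is a common textbook choice, and `predict_noise` is a placeholder where a trained text-conditioned denoising network would go, so this is an illustration of the general procedure, not OpenAI's implementation.

```python
import numpy as np

# Minimal DDPM-style sampling sketch: start from pure noise and, for
# t = T..1, subtract the noise that a (here: dummy) network predicts.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise schedule (a textbook choice)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t, prompt):
    # Placeholder for the trained network eps_theta(x_t, t, prompt).
    return np.zeros_like(x)

def sample(prompt, shape=(64, 64, 3), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)   # step T: the image is pure noise
    for t in reversed(range(T)):
        eps = predict_noise(x, t, prompt)
        # Standard DDPM posterior mean, then fresh noise except at t = 0.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        x = mean + (np.sqrt(betas[t]) * rng.standard_normal(shape) if t > 0 else 0.0)
    return x

image = sample("a grumpy cat in a spaceship")
```

Each pass through the loop removes a little of the predicted noise, which is exactly the gradual refinement toward the text description that the transcription describes.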
[{"start": 0.0, "end": 4.72, "text": " And dear fellow scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.72, "end": 10.24, "text": " Today we are going to play some more with open AI's amazing technique,"}, {"start": 10.24, "end": 17.76, "text": " the only tool where we can write a text description and the AI creates an amazing image of exactly that."}, {"start": 18.240000000000002, "end": 24.0, "text": " That sounds cool and it gets even cooler, the crazier the ideas we give to it."}, {"start": 24.0, "end": 30.72, "text": " This is an AI endowed with a diffusion-based model which means that when we ask it something,"}, {"start": 30.72, "end": 39.36, "text": " it starts out from noise and over time it iteratively refines this image to match our description better."}, {"start": 39.36, "end": 44.56, "text": " And over time an absolutely incredible image emerges."}, {"start": 44.56, "end": 51.84, "text": " This is a neural network that is given a ton of images and a piece of text description that says"}, {"start": 51.84, "end": 56.64, "text": " what is in this image. That is one image caption pair."}, {"start": 56.64, "end": 61.760000000000005, "text": " Dolly too is given millions and millions of these image pairs."}, {"start": 61.760000000000005, "end": 65.12, "text": " So what can it do with all this knowledge?"}, {"start": 65.12, "end": 69.44, "text": " Well, the best part is that it can combine things."}, {"start": 69.44, "end": 72.0, "text": " Oh yes, this is the key."}, {"start": 72.0, "end": 79.60000000000001, "text": " This does not copy images from this training data, but it truly comes up with novel images."}, {"start": 79.60000000000001, "end": 80.64, "text": " How?"}, {"start": 80.64, "end": 89.12, "text": " Well, after it had seen a bunch of images of koalas and separately a bunch of images of motorcycles,"}, {"start": 89.12, "end": 98.4, "text": " it starts to understand the concept of both and it will be able to combine the two together into a completely new image."}, {"start": 99.44, "end": 105.76, "text": " And here you can see how much Dolly too has improved since Dolly won."}, {"start": 105.76, "end": 110.24000000000001, "text": " It is on a completely different level from its first iteration."}, {"start": 110.24, "end": 112.47999999999999, "text": " This is so much better."}, {"start": 112.47999999999999, "end": 121.75999999999999, "text": " And once again, this is an AI that was improved this much in just a year, I can hardly believe what I am seeing here."}, {"start": 121.75999999999999, "end": 123.91999999999999, "text": " What a time to be alive."}, {"start": 123.91999999999999, "end": 130.32, "text": " And now that some time has passed, new interesting capabilities have emerged."}, {"start": 130.32, "end": 136.4, "text": " Now, hold on to your papers and let's have a look at 10 more amazing examples."}, {"start": 136.4, "end": 142.16, "text": " One, for instance, it can create not just images, but small videos."}, {"start": 142.88, "end": 143.44, "text": " Videos?"}, {"start": 144.08, "end": 144.88, "text": " How?"}, {"start": 144.88, "end": 149.76, "text": " Well, look at this video where a Victorian house is getting modernized."}, {"start": 150.48000000000002, "end": 150.96, "text": " Wow!"}, {"start": 151.52, "end": 153.28, "text": " So, what is going on here?"}, {"start": 153.84, "end": 161.52, "text": " Well, we can enter a text prompt, get an image, then change the text just a tiny bit,"}, {"start": 161.52, "end": 169.20000000000002, 
"text": " get a slightly changed image and repeat the process over and over until we get an amazing video"}, {"start": 169.20000000000002, "end": 176.48000000000002, "text": " like this. We can also run this process backwards and Victorianize a modern building as well."}, {"start": 177.12, "end": 183.12, "text": " Two, if you don't believe that it can combine several existing concepts into something that"}, {"start": 183.12, "end": 188.0, "text": " definitely does not exist, hold on to your papers and have a look at this one."}, {"start": 188.0, "end": 192.32, "text": " Oh yes, Apple products Leonardo da Vinci style."}, {"start": 192.88, "end": 194.64, "text": " This is truly insane."}, {"start": 195.2, "end": 199.52, "text": " If all this does not feel like human-like intelligence, I don't know what is."}, {"start": 200.4, "end": 206.16, "text": " Three, here is how the AI imagines robot learning a new language."}, {"start": 206.96, "end": 211.84, "text": " Four, it can create new kinds of drinks and my goodness."}, {"start": 212.32, "end": 214.72, "text": " Well, I am super happy now."}, {"start": 214.72, "end": 218.8, "text": " You are probably asking, Karoi, why are you super happy?"}, {"start": 218.8, "end": 225.84, "text": " Well, I am a light transport researcher by trade and I spend a great deal of my time computing"}, {"start": 225.84, "end": 226.64, "text": " caustics."}, {"start": 226.64, "end": 234.4, "text": " Oh yes, caustics are these beautiful patterns that emerge when we hit a reflective or refractive"}, {"start": 234.4, "end": 241.2, "text": " object with light just the right way and these can look especially magical if we have an object"}, {"start": 241.2, "end": 247.67999999999998, "text": " with a complex geometry and the fact that the AI also understands this about our world,"}, {"start": 248.23999999999998, "end": 250.32, "text": " I am truly speechless."}, {"start": 251.04, "end": 251.76, "text": " Beautiful."}, {"start": 252.79999999999998, "end": 257.91999999999996, "text": " Five, it can also create new inventions and this is a toilet car."}, {"start": 258.4, "end": 263.76, "text": " You know, some people are quite busy and if you have to go when you are on the go,"}, {"start": 263.76, "end": 265.44, "text": " well this one is for you."}, {"start": 266.0, "end": 267.44, "text": " What a time to be alive."}, {"start": 267.44, "end": 275.2, "text": " Six, in this one, the AI puts up a clinic in understanding combinations of concepts."}, {"start": 275.2, "end": 276.56, "text": " Check this out."}, {"start": 276.56, "end": 279.84, "text": " This is plants surrounding a light bulb."}, {"start": 279.84, "end": 283.36, "text": " Now a light bulb surrounding some plants,"}, {"start": 283.36, "end": 290.88, "text": " now a plant with a light bulb inside and a light bulb with plants inside."}, {"start": 290.88, "end": 295.44, "text": " It can follow these instructions really well on all of these cases."}, {"start": 295.44, "end": 303.76, "text": " Seven, it can also create larger images too, so much so that entire murals can be requested."}, {"start": 304.32, "end": 311.12, "text": " And here, not just the quality, but the variety of the results is truly a sight to behold."}, {"start": 311.92, "end": 315.6, "text": " Eight, this is the famous Sith Lord, Darth Ant."}, {"start": 316.48, "end": 321.36, "text": " Oh yes, this is Darth Vader, reimagined as a robot ant."}, {"start": 321.36, "end": 325.04, "text": " I love it."}, {"start": 325.04, "end": 330.08000000000004, 
"text": " Nine, if you remember previously, when we requested that it writes a piece of text on a sign,"}, {"start": 330.08000000000004, "end": 332.0, "text": " it floundered a great deal."}, {"start": 332.0, "end": 335.12, "text": " This sign is supposed to say deep learning."}, {"start": 335.12, "end": 342.96000000000004, "text": " And look, the amazing Peter Wellender found a way to make it write things on signs properly."}, {"start": 342.96000000000004, "end": 346.8, "text": " And all this with an amazing death-of-field effect."}, {"start": 346.8, "end": 352.56, "text": " Once again, this is an AI where a vast body of knowledge lies within,"}, {"start": 352.56, "end": 357.44, "text": " but it only emerges if we can bring it out with properly written prompts."}, {"start": 357.44, "end": 362.40000000000003, "text": " It almost feels like a new kind of programming that is open to everyone,"}, {"start": 362.40000000000003, "end": 366.72, "text": " even people without any programming or technical knowledge."}, {"start": 366.72, "end": 369.52, "text": " This is from engineering, if you will."}, {"start": 369.52, "end": 374.24, "text": " Perhaps a new kind of job that is just coming into existence."}, {"start": 374.24, "end": 377.2, "text": " And ten, now check this out."}, {"start": 377.2, "end": 382.72, "text": " We can even give instructions to it as a photographer with instructed camera,"}, {"start": 382.72, "end": 388.96000000000004, "text": " request mammoth clouds, we marvel together at the simulation of those in an earlier video."}, {"start": 388.96000000000004, "end": 394.08, "text": " And, oh my goodness, that cannot be true."}, {"start": 394.08, "end": 398.56, "text": " Look, the hand has subsurface scattering."}, {"start": 398.56, "end": 400.08, "text": " What is that?"}, {"start": 400.08, "end": 405.03999999999996, "text": " That is the effect of light penetrating the skin, bouncing around,"}, {"start": 405.03999999999996, "end": 409.68, "text": " and either coming out on the same or the other side."}, {"start": 409.68, "end": 413.28, "text": " It has this absolutely beautiful halo effect."}, {"start": 413.28, "end": 418.24, "text": " We worked on this a bit in an earlier paper together with Activision Blizzard,"}, {"start": 418.24, "end": 425.36, "text": " and it took a great deal of mathematics and physics to perform this efficiently in a computer graphics engine."}, {"start": 425.36, "end": 428.88, "text": " And now the AI just knows what it is."}, {"start": 428.88, "end": 430.8, "text": " I really don't know what to say."}, {"start": 430.8, "end": 435.68, "text": " And, as always, plus one, because I couldn't resist."}, {"start": 435.68, "end": 440.64, "text": " If you are interested in trying a reduced version of Dolly, check this out."}, {"start": 440.64, "end": 443.6, "text": " The link is available in the video description."}, {"start": 443.6, "end": 448.71999999999997, "text": " Once again, please note that this is a highly reduced version of Dolly,"}, {"start": 448.71999999999997, "end": 450.71999999999997, "text": " but it is still quite fun."}, {"start": 450.71999999999997, "end": 452.71999999999997, "text": " Let the experiments begin."}, {"start": 452.72, "end": 459.84000000000003, "text": " And I also cannot wait to get access to the full model some 2 minute papers mascot figures,"}, {"start": 459.84000000000003, "end": 466.96000000000004, "text": " and obviously images of wise scholars holding onto their papers must come into existence."}, {"start": 
466.96000000000004, "end": 474.16, "text": " I am sure this tool will democratize art creation by putting it into the hands of all of us."}, {"start": 474.16, "end": 478.56, "text": " We all have so many ideas and so little time."}, {"start": 478.56, "end": 481.52000000000004, "text": " And Dolly will help with exactly that."}, {"start": 481.52, "end": 483.12, "text": " So cool!"}, {"start": 483.12, "end": 490.71999999999997, "text": " Just imagine having an AI artist that is, let's say, just half as good as a human artist,"}, {"start": 490.71999999999997, "end": 495.91999999999996, "text": " but the AI can paint 10,000 images a day for free."}, {"start": 495.91999999999996, "end": 501.12, "text": " Cover art for your album, illustrations for your novel, not a problem."}, {"start": 501.12, "end": 504.32, "text": " A little brain in a jar for everyone, if you will."}, {"start": 504.32, "end": 507.12, "text": " And now it's a really good one."}, {"start": 507.12, "end": 508.71999999999997, "text": " Bravo Open AI!"}, {"start": 508.72, "end": 514.8000000000001, "text": " I am starting to believe that the impact of this AI is going to be so strong,"}, {"start": 514.8000000000001, "end": 520.48, "text": " there will be the world as we know it before Dolly and the world after it."}, {"start": 520.48, "end": 522.88, "text": " So, does this get your mind going?"}, {"start": 522.88, "end": 525.12, "text": " What else would you use this for?"}, {"start": 525.12, "end": 527.12, "text": " Let me know in the comments below."}, {"start": 527.12, "end": 530.72, "text": " This video has been supported by weights and biases."}, {"start": 530.72, "end": 535.2, "text": " Being a machine learning researcher means doing tons of experiments"}, {"start": 535.2, "end": 538.32, "text": " and, of course, creating tons of data."}, {"start": 538.32, "end": 542.72, "text": " But, I am not looking for data, I am looking for insights."}, {"start": 542.72, "end": 546.32, "text": " And, weights and biases helps with exactly that."}, {"start": 546.32, "end": 550.72, "text": " They have tools for experiment tracking, data set and model versioning,"}, {"start": 550.72, "end": 553.9200000000001, "text": " and even hyper parameter optimization."}, {"start": 553.9200000000001, "end": 559.9200000000001, "text": " No wonder this is the experiment tracking tool choice of Open AI, Toyota Research,"}, {"start": 559.9200000000001, "end": 563.12, "text": " Samsung, and many more prestigious labs."}, {"start": 563.12, "end": 568.72, "text": " Make sure to use the link WNB.me slash paper intro,"}, {"start": 568.72, "end": 571.52, "text": " or just click the link in the video description."}, {"start": 571.52, "end": 575.92, "text": " And, try this 10 minute example of weights and biases today"}, {"start": 575.92, "end": 579.92, "text": " to experience the wonderful feeling of training a neural network"}, {"start": 579.92, "end": 583.12, "text": " and being in control of your experiments."}, {"start": 583.12, "end": 585.92, "text": " After you try it, you won't want to go back."}, {"start": 585.92, "end": 588.32, "text": " Thanks for watching and for your generous support."}, {"start": 588.32, "end": 596.32, "text": " And, I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MUDveGZIRaM
NVIDIA Renders Millions of Light Sources! 🔅
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting" is available here: https://research.nvidia.com/publication/2020-07_Spatiotemporal-reservoir-resampling 📝 Our earlier paper with the scene that took 3 weeks: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ The 2D light transport webapp: https://benedikt-bitterli.me/tantalum/tantalum.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how NVIDIA's light transport algorithm can render this virtual amusement park that contains no less than 3.4 million light sources interactively. Yes, millions of light sources. How is this even possible? Well, to understand what is going on here, we need to talk about two concepts. If we wish to create a fully photorealistic scene in computer graphics, we reach out to a light transport simulation algorithm, and then this happens. Oh yes, concept number one. Noise. This is not photorealistic at all, not yet anyway. Why is that? Well, during this process, we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images. This clears up over time, but it may take from minutes to days for this to happen even for a smaller scene. For instance, this one took us three full weeks to finish. Weeks. Ouch. And in a moment, you will see that NVIDIA's technique can render even bigger scenes than this, not in weeks, but in milliseconds. Yes, really, I am not kidding. And now, concept number two. Biased and unbiased. There are different classes of light transport algorithms. Here you see an unbiased technique that doesn't cut any corners, and now look. Oh yes, this is a biased technique, and as you see, this one can give us a much cleaner image in the same amount of time. But the price to be paid for this is that some corners have been cut in the process. And with that, now hold onto your papers and have a look at NVIDIA's technique. Oh yes, this was the scene with the 3.4 million light sources, and this method can really render not just an image, but an animation of it interactively. That is absolutely incredible. And it has even more to offer. Look, the light sources can even be dynamic, or in other words, they can move around as they please, and the algorithm still doesn't break a sweat. Oh my goodness, that is absolutely amazing. Now, let's see how it compares to a previous technique. Oh boy, see the noise levels here? This is so noisy, we don't even know what the scene should exactly look like. But now, when we have a look at this technique, oh yes, now we know for sure that there are some glossy reflections going on here. Even the unbiased version of the new technique is significantly better here. However, look, if we are willing to let the simulator cut some corners, this is the biased version, and oh my goodness. Those difficult glossy reflections are so much cleaner. I love it. This amusement park scene contains a total of over 20 million triangles, and once again, the biased version of the new technique makes it truly a sight to behold. And this does not take from minutes to days to compute. Each of these images was produced in a matter of milliseconds. Wow. But it gets better. How? Well, the more detailed comparisons in the paper reveal that this method is 10 to 100 times faster than previous techniques, and it also maps really well onto our graphics cards. Okay, what is behind all this wizardry? How is this even possible? Well, the magic behind all this is a smarter allocation of the very samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we moved the camera a little and advanced to the next image.
Thus, lots of information that would otherwise be thrown away can now be reused as we advance the animation. Now note that there are so many papers out there on how to allocate these rays properly. This field is so mature, it truly is a challenge to create something that is just a few percentage points better than previous techniques. It is very hard to make even the tiniest difference, and to be able to create something that is 10 to 100 times better in this environment, that is insanity. And this proper sample allocation has one more advantage. What is that? Well, have a look at this. Imagine that you are a good painter and you are given this image. Now your task is to finish it. Do you know what this depicts? Hmm, maybe. But knowing all the details of this image is out of the question. Now, look, we don't have to live with these noisy images; we have denoising algorithms tailored for light transport simulations. This one does some serious legwork with this noisy input, but even this one cannot possibly know exactly what is going on because there is so much information missing from the noisy input. And now, if you have been holding onto your paper so far, squeeze that paper because, look, this technique can produce this image in the same amount of time. Now we are talking. Now, let's give it to the denoising algorithm. And, yes, we get a much sharper, much more detailed output. Actually, let's compare it to the clean reference image. Oh, yes, this is so much closer. And the fact that just one more paper down the line, we could go from this to this absolutely blows my mind. So cool. But wait, we noted that the biased version of the technique cuts some corners. What is the price to be paid for this? What are we losing here? Well, some energy. What does that mean? It means that the image it produces is typically a little darker. There are other differences, but usually they are relatively minor, making this an excellent trade-off. Adding it all together, this will be an excellent tool in democratizing light transport algorithms and putting them into the hands of everyone. And to think that years ago we had this, and now we can get this running on a commodity graphics card, I am truly out of words. The pace of progress in computer graphics research is nothing short of amazing. What a time to be alive. Now, if you also get excited by this, look here. This is not an implementation of this paper, but the first author also has a nice little web app where you can create interesting 2D light transport situations with lots of caustics. And the results are absolutely beautiful. The link is available in the video description. Check it out. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
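For context on the "smarter allocation of samples" above: the paper's central tool is resampled importance sampling with reservoirs (the family of techniques known as ReSTIR). Here is a minimal sketch of the idea, with a made-up scalar `target_pdf` standing in for the real, scene-dependent target function; this is an illustration, not the paper's implementation.

```python
import random

class Reservoir:
    """Holds one light sample chosen from a stream of candidates."""
    def __init__(self):
        self.sample = None  # index of the currently chosen light
        self.w_sum = 0.0    # running sum of resampling weights
        self.count = 0      # number of candidates seen so far

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        # keep the newcomer with probability weight / w_sum
        if self.w_sum > 0.0 and random.random() * self.w_sum < weight:
            self.sample = candidate

def pick_light(lights, target_pdf, n_candidates=32):
    """Resampled importance sampling: stream cheap uniform candidates,
    reweight them toward the target function, keep just one."""
    r = Reservoir()
    for _ in range(n_candidates):
        i = random.randrange(len(lights))
        # weight = target / source, where the source pdf here is uniform
        r.update(i, target_pdf(lights[i]) * len(lights))
    return r

def merge(r_prev, r_now):
    """Temporal reuse (simplified): the previous frame's reservoir is folded
    in as a single weighted candidate instead of being thrown away."""
    merged = Reservoir()
    merged.update(r_now.sample, r_now.w_sum)
    merged.update(r_prev.sample, r_prev.w_sum)
    merged.count = r_prev.count + r_now.count
    return merged

# Toy usage: millions of "lights", brightness as a stand-in target function.
lights = [random.random() for _ in range(3_400_000)]
print("chosen light:", pick_light(lights, target_pdf=lambda b: b).sample)
```

The key property is that each pixel touches only a handful of candidates per frame, yet by merging reservoirs across frames and neighboring pixels the effective sample count keeps growing, which is the intuition behind the large speedups the paper reports.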
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 20.0, "text": " Today, we are going to see how NVIDIA's light transport algorithm can render this virtual amusement park that contains no less than 3.4 million light sources interactively."}, {"start": 20.0, "end": 25.2, "text": " Yes, millions of light sources. How is this even possible?"}, {"start": 25.2, "end": 41.2, "text": " Well, to understand what is going on here, we need to talk about two concepts. If we wish to create a fully photorealistic scene in computer graphics, we reach out to a light transport simulation algorithm, and then this happens."}, {"start": 41.2, "end": 45.2, "text": " Oh yes, concept number one. Noise."}, {"start": 45.2, "end": 70.2, "text": " This is not photorealistic at all, not yet anyway. Why is that? Well, during this process, we have to shoot millions and millions of light rays into the scene to estimate how much light is bouncing around, and before we have simulated enough rays, the inaccuracies in our estimations show up as noise in these images."}, {"start": 70.2, "end": 80.2, "text": " This clears up over time, but it may take from minutes to days for this to happen even for a smaller scene."}, {"start": 80.2, "end": 88.2, "text": " For instance, this one took us three full weeks to finish. Weeks. Ouch."}, {"start": 88.2, "end": 102.2, "text": " And in a moment, you will see that Nvidia's technique can render even bigger scenes than this, not in weeks, but in milliseconds. Yes, really, I am not kidding."}, {"start": 102.2, "end": 117.2, "text": " And for now, concept number two. Bias and unbiased. There are different classes of light transport algorithms. Here you see an unbiased technique that doesn't cut any corners, and now look."}, {"start": 117.2, "end": 127.2, "text": " Oh yes, this is a biased technique, and as you see, this one can give us a much cleaner image in the same amount of time."}, {"start": 127.2, "end": 139.2, "text": " But the price to be paid for this is that some corners have been cut in the process. And with that, now hold onto your papers and have a look at Nvidia's technique."}, {"start": 139.2, "end": 151.2, "text": " Oh yes, this was the scene with the 3.4 million light sources, and this method can really render not just an image, but an animation of it interactively."}, {"start": 151.2, "end": 157.2, "text": " That is absolutely incredible. And it has even more to offer."}, {"start": 157.2, "end": 169.2, "text": " Look, the light sources can even be dynamic, or in other words, they can move around as they please, and the algorithm still doesn't break a sweat."}, {"start": 169.2, "end": 177.2, "text": " Oh my goodness, that is absolutely amazing. Now, let's see how it compares to a previous technique."}, {"start": 177.2, "end": 195.2, "text": " Oh boy, see the noise levels here? This is so noisy, we don't even know what the scene should exactly look like. But now, when we have a look at this technique, oh yes, now we know for sure that there are some glossier reflections going on here."}, {"start": 195.2, "end": 216.2, "text": " Even the unbiased version of the new technique is significantly better in this. However, look, if we are willing to let the simulator cut some corners, this is the biased version, and oh my goodness. Those difficult glossier reflections are so much cleaner. 
I love it."}, {"start": 216.2, "end": 228.2, "text": " This amusement park scene contains a total of over 20 million triangles, and once again, the biased version of the new technique makes it truly a sight to behold."}, {"start": 228.2, "end": 240.2, "text": " And this does not take from minutes to days to compute. Each of these images were produced in a matter of milliseconds. Wow. But it gets better."}, {"start": 240.2, "end": 255.2, "text": " How? Well, the more detailed comparisons in the paper reveal that this method is 10 to 100 times faster than previous techniques, and it also maps really well onto our graphics cards."}, {"start": 255.2, "end": 279.2, "text": " Okay, what is behind all this wizardry? How is this even possible? Well, the magic behind all this is a smarter allocation of these very samples that we have to shoot into the scene. For instance, this technique does not forget what we did just a moment ago when we moved the camera a little and advanced to the next image."}, {"start": 279.2, "end": 293.2, "text": " Thus, lots of information that is otherwise thrown away can now be reused as we advanced the animation. Now note that there are so many papers out there on how to allocate these rays properly."}, {"start": 293.2, "end": 303.2, "text": " This field is so mature, it truly is a challenge to create something that is just a few percentage points better than previous techniques."}, {"start": 303.2, "end": 316.2, "text": " It is very hard to make even the tiniest difference, and to be able to create something that is 10 to 100 times better in this environment, that is insanity."}, {"start": 316.2, "end": 324.2, "text": " And this property allocation has one more advantage. What is that? Well, have a look at this."}, {"start": 324.2, "end": 340.2, "text": " Imagine that you are a good painter and you are given this image. Now your task is to finish it. Do you know what this depicts? Hmm, maybe. But knowing all the details of this image is out of question."}, {"start": 340.2, "end": 362.2, "text": " Now, look, we don't have to live with these noisy images, we have the noisy algorithms tailored for light simulations. This one does some serious legwork with this noisy input, but even this one cannot possibly know exactly what is going on because there is so much information missing from the noisy input."}, {"start": 362.2, "end": 385.2, "text": " And now, if you have been holding onto your paper so far, squeeze that paper because, look, this technique can produce this image in the same amount of time. Now we are talking. Now, let's give it to the denoising algorithm. And, yes, we get a much sharper, much more detailed output."}, {"start": 385.2, "end": 402.2, "text": " Actually, let's compare it to the clean reference image. Oh, yes, this is so much closer. And the fact that just one more paper down the line, we could go from this to this absolutely blows my mind. So cool."}, {"start": 402.2, "end": 412.2, "text": " But wait, we noted that the biased version of the technique cuts some corners. What is the price to be paid for this? What are we losing here?"}, {"start": 412.2, "end": 428.2, "text": " Well, some energy. What does that mean? It means that the image it produces is typically a little darker. 
There are other differences, but usually they are relatively minor making this an excellent trade off."}, {"start": 428.2, "end": 449.2, "text": " Adding it all together, this will be an excellent tool in democratising light transport algorithms and putting it into the hands of everyone. And to think that years ago we had this, and now we can get this running on a commodity graphics card, I am truly out of words."}, {"start": 449.2, "end": 474.2, "text": " The pace of progress in computer graphics research is nothing short of amazing. What a time to be alive. Now, if you also get excited by this, look here. This is not an implementation of this paper, but the first author also has a nice little web app where you can create interesting 2D light transport situations with lots of caustics."}, {"start": 474.2, "end": 481.2, "text": " And the results are absolutely beautiful. The link is available in the video description. Check it out."}, {"start": 481.2, "end": 492.2, "text": " This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 492.2, "end": 509.2, "text": " Get this, they've recently launched an Nvidia RTX 8000 with 48GB of memory. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 509.2, "end": 528.2, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com, slash, papers to sign up for one of their amazing GPU instances today."}, {"start": 528.2, "end": 539.2, "text": " Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=N-Pf9lCFi4E
Google’s New AI: Flying Through Virtual Worlds! 🕊️
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/Mip-NeRF 📝 The paper "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields" is available here: https://jonbarron.info/mipnerf360/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take a collection of photos like these and magically create a video where we can fly through these photos. So, how is this even possible? Especially since the input is only a handful of photos. Well, typically we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. Of course, that sounds impossible. Admittedly, some information is given about the scene, but it is really not that much. And as you see, this is not impossible at all. Through the power of learning-based techniques, this previous AI is already capable of pulling off this amazing trick. And today I am going to show you that through this incredible new paper, with a little expertise, something like this can be done even at home on our own machines. Why? Because now, research scientists at Google and Harvard have also published their take on this problem. And they promised two fantastic improvements. Improvement number one: unbounded scenes. No more front-facing scenes with a stationary camera. They say that now we can rotate the camera around the object and their technique will still work. And this is a huge deal, if it indeed works. We will see. You know what? Hold on to your papers and let's see together right now. Wow! This does not look like an AI-made video out of a bunch of photos. This looks like reality. My goodness! And, of course, you are experienced Fellow Scholars over there, so I bet you are immediately interested in the fidelity of the geometry, which truly is a sight to behold. I am a light-transport researcher by trade, so I am additionally looking at the specular highlights. This is as close to reality as it gets. The depth maps it produces, which describe the distance of these objects from the camera, are also silky smooth. Outstanding! So, is this a one-off result or is this a robust technique that works on a variety of scenes? Well, I bet you know the answer by now. But, wait, we talked about two promises. Promise number one was the unbounded scenes with the moving camera, and we have just seen it fulfilled. What about promise number two? Well, promise number two is free anti-aliasing. Oh boy, this is a technique from computer graphics that helps us overcome the jagged edges that are usually present in lower resolution images. And it works so well. Check out this comparison against a previous work by the name of Mip-NeRF. And it is just so much better. The new method truly seems to be in a league of its own. We get smoother lines, and pretty much every material and every piece of geometry comes out better. And note that this previous method is not some ancient technique. No, no. Mip-NeRF is a technique from just a year ago. Bravo Google and Bravo Harvard. And remember, all this comes from just a quick camera scan. And the rest of the details are learned and filled in by an AI. And this technique now truly has a remarkable understanding of the world around us. So, just using a commodity camera, walking around the scene and creating a digital video game version of it, absolutely incredible. Sign me up right now. So, what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. What you see here is a report on this exact paper we have talked about, which was made with Weights & Biases. I put a link to it in the description. Make sure to have a look.
I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description. And you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
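For the technically curious: NeRF-family methods, including the Mip-NeRF line discussed here, turn a network's predicted densities and colors along a camera ray into a pixel with the classic volume rendering quadrature, and the silky smooth depth maps fall out of the same weights. Below is a minimal NumPy sketch of that compositing step; the random inputs are stand-ins for what the learned radiance field would actually output, and this is not the paper's code.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Classic NeRF volume rendering:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, where the
    transmittance T_i is the probability light survives up to sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                       # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)  # final pixel color
    depth = (weights * np.cumsum(deltas)).sum()    # expected depth along the ray
    return rgb, depth

# Stand-in network outputs for 64 samples along a single ray:
n = 64
rgb, depth = composite_ray(
    sigmas=np.random.rand(n) * 5.0,   # made-up volume densities
    colors=np.random.rand(n, 3),      # made-up per-sample colors
    deltas=np.full(n, 1.0 / n),       # spacing between samples
)
print("pixel color:", rgb, "expected depth:", depth)
```

The anti-aliasing promise then comes from shading cones instead of infinitesimal rays, so each sample along the ray represents a small volume rather than a single point.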
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 12.4, "text": " Today we are going to take a collection of photos like these and magically create a video"}, {"start": 12.4, "end": 16.0, "text": " where we can fly through these photos."}, {"start": 16.0, "end": 19.0, "text": " So, how is this even possible?"}, {"start": 19.0, "end": 24.0, "text": " Especially that the input is only a handful of photos."}, {"start": 24.0, "end": 31.0, "text": " Well, typically we give it to a learning algorithm and ask it to synthesize a photorealistic"}, {"start": 31.0, "end": 35.0, "text": " video where we fly through the scene as we please."}, {"start": 35.0, "end": 38.0, "text": " Of course, that sounds impassable."}, {"start": 38.0, "end": 45.0, "text": " Especially that some information is given about the scene, but this is really not that much."}, {"start": 45.0, "end": 49.0, "text": " And as you see, this is not impassable at all."}, {"start": 49.0, "end": 55.64, "text": " Through the power of learning-based techniques, this previous AI is already capable of pulling"}, {"start": 55.64, "end": 58.0, "text": " off this amazing trick."}, {"start": 58.0, "end": 66.0, "text": " And today I am going to show you that through this incredible new paper with a little expertise,"}, {"start": 66.0, "end": 71.0, "text": " something like this can be done even at home on our own machines."}, {"start": 71.0, "end": 72.0, "text": " Why?"}, {"start": 72.0, "end": 79.0, "text": " Because now, research scientists at Google and Harvard also published their take on this"}, {"start": 79.0, "end": 80.0, "text": " problem."}, {"start": 80.0, "end": 86.0, "text": " And they promised two fantastic improvements, improvement number one."}, {"start": 86.0, "end": 87.0, "text": " Unbounded scenes."}, {"start": 87.0, "end": 91.0, "text": " No more front-facing scene with a stationary camera."}, {"start": 91.0, "end": 98.0, "text": " They say that now we can rotate the camera around the object and their technique will still"}, {"start": 98.0, "end": 99.0, "text": " work."}, {"start": 99.0, "end": 103.0, "text": " And this is a huge deal, if it indeed works."}, {"start": 103.0, "end": 104.0, "text": " We will see."}, {"start": 104.0, "end": 105.0, "text": " You know what?"}, {"start": 105.0, "end": 109.0, "text": " Hold on to your papers and let's see together right now."}, {"start": 109.0, "end": 110.0, "text": " Wow!"}, {"start": 110.0, "end": 116.0, "text": " This does not look like an AI-made video out of a bunch of photos."}, {"start": 116.0, "end": 118.0, "text": " This looks like reality."}, {"start": 118.0, "end": 120.0, "text": " My goodness!"}, {"start": 120.0, "end": 126.0, "text": " And, of course, you are experienced fellow scholars over there, so I bet you are immediately"}, {"start": 126.0, "end": 132.0, "text": " interested in the fidelity of the geometry which truly is a sight to behold."}, {"start": 132.0, "end": 139.0, "text": " I am a light-transport researcher by trade, so I am additionally looking at the specular"}, {"start": 139.0, "end": 140.0, "text": " highlights."}, {"start": 140.0, "end": 143.0, "text": " This is as close to reality as it gets."}, {"start": 143.0, "end": 149.0, "text": " The depth map it produces which describe the distance of these objects from the camera,"}, {"start": 149.0, "end": 153.0, "text": " and they are also silky smooth."}, {"start": 153.0, "end": 154.0, "text": " Outstanding!"}, 
{"start": 154.0, "end": 161.0, "text": " So, is this a one-off result or is this a robust technique that works on a variety of"}, {"start": 161.0, "end": 162.0, "text": " scenes?"}, {"start": 162.0, "end": 165.0, "text": " Well, I bet you know the answer by now."}, {"start": 165.0, "end": 169.0, "text": " But, wait, we talked about two promises."}, {"start": 169.0, "end": 174.0, "text": " Promise number one was the unbounded scenes with the moving camera."}, {"start": 174.0, "end": 175.0, "text": " Who are these?"}, {"start": 175.0, "end": 177.0, "text": " Promise number two."}, {"start": 177.0, "end": 181.0, "text": " Well, promise number two is free anti-aliasing."}, {"start": 181.0, "end": 189.0, "text": " Oh boy, this is a technique from computer graphics that helps us overcome these jagged edges"}, {"start": 189.0, "end": 193.0, "text": " that are usually present in lower resolution images."}, {"start": 193.0, "end": 195.0, "text": " And it works so well."}, {"start": 195.0, "end": 201.0, "text": " Check out this comparison against a previous work by the name MIPNURF."}, {"start": 201.0, "end": 203.0, "text": " And it is just so much better."}, {"start": 203.0, "end": 208.0, "text": " The new method truly seems to be in a league of its own."}, {"start": 208.0, "end": 215.0, "text": " We get smoother lines and pretty much every material and every piece of geometry comes out"}, {"start": 215.0, "end": 216.0, "text": " better."}, {"start": 216.0, "end": 220.0, "text": " And note that this previous method is not some ancient technique."}, {"start": 220.0, "end": 222.0, "text": " No, no."}, {"start": 222.0, "end": 226.0, "text": " MIPNURF is a technique from just a year ago."}, {"start": 226.0, "end": 229.0, "text": " Bravo Google and Bravo Harvard."}, {"start": 229.0, "end": 234.0, "text": " And remember, all this comes from just a quick camera scan."}, {"start": 234.0, "end": 239.0, "text": " And the rest of the details is learned and filled in by an AI."}, {"start": 239.0, "end": 245.0, "text": " And this technique now truly has a remarkable understanding of the world around us."}, {"start": 245.0, "end": 252.0, "text": " So, just using a commodity camera, walking around the scene and creating a digital video game"}, {"start": 252.0, "end": 255.0, "text": " version of it, absolutely incredible."}, {"start": 255.0, "end": 257.0, "text": " Sign me up right now."}, {"start": 257.0, "end": 259.0, "text": " So, what do you think?"}, {"start": 259.0, "end": 261.0, "text": " What else could this be useful for?"}, {"start": 261.0, "end": 265.0, "text": " What do you expect to happen a couple more papers down the line?"}, {"start": 265.0, "end": 267.0, "text": " Please let me know in the comments below."}, {"start": 267.0, "end": 269.0, "text": " I'd love to hear your thoughts."}, {"start": 269.0, "end": 274.0, "text": " What you see here is a report of this exact paper we have talked about which was made by"}, {"start": 274.0, "end": 275.0, "text": " Wates and Biasis."}, {"start": 275.0, "end": 277.0, "text": " I put a link to it in the description."}, {"start": 277.0, "end": 279.0, "text": " Make sure to have a look."}, {"start": 279.0, "end": 282.0, "text": " I think it helps you understand this paper better."}, {"start": 282.0, "end": 287.0, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 287.0, "end": 293.0, "text": " Using their system, you can create beautiful reports like this one to explain your findings"}, {"start": 
293.0, "end": 295.0, "text": " to your colleagues better."}, {"start": 295.0, "end": 301.0, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub, and more."}, {"start": 301.0, "end": 309.0, "text": " And the best part is that Wates and Biasis is free for all individuals, academics, and open source projects."}, {"start": 309.0, "end": 316.0, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description."}, {"start": 316.0, "end": 319.0, "text": " And you can get a free demo today."}, {"start": 319.0, "end": 326.0, "text": " Our thanks to Wates and Biasis for their long standing support and for helping us make better videos for you."}, {"start": 326.0, "end": 355.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ayuEnJmwocE
This AI Makes You A Virtual Stuntman! 💪
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers A report of theirs is available here: http://wandb.me/human-pose-estimation 📝 The paper "Human Dynamics from Monocular Video with Dynamic Camera Movements" is available here: https://mrl.snu.ac.kr/research/ProjectMovingCam/MovingCam.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we will try to create a copy of ourselves and place it into a virtual world. Now this will be quite a challenge. Normally, to do this, we have to ask an artist to create a digital copy of us, which takes a lot of time and effort. But there may be a way out. Look, with this earlier AI-based technique, we can take a piece of target geometry and have an algorithm try to rebuild it to be used within a virtual world. The process is truly a sight to behold. Look at how beautifully it sculpts this piece of geometry until it looks like our target shape. This is wonderful. But wait a second, if we wish to create a copy of ourselves, we probably want it to move around too. This is, however, a stationary piece of geometry. No movement is allowed here. So, what do we do? What about movement? Well, have a look at this new technique, where getting a piece of geometry with movement cannot get any simpler than this. Just do your thing, record it with a camera and give it to the AI. And I have to say, I am a little skeptical. Look, this is what a previous technique could get us. This is not too close to what we are looking for. So, let's see what the new method can do with this data. And, uh oh, this is not great. So, is this it? Is the geometry cloning dream dead? Well, don't despair quite yet. This issue happens because our starting position and orientation are not known to the algorithm, but it can be remedied. How? Well, by adding additional data for the AI to learn from. And now, hold on to your papers and let's see what it can do now. And, oh my goodness, are you seeing what I am seeing? Our movements are now replicated in a virtual world almost perfectly. Look at that beautiful animation. Absolutely incredible. And, if even this is not good enough, look at this result too. So good. Loving it. And believe it or not, it has even more coolness up its sleeve. If you have been holding on to your papers so far, now squeeze that paper because here comes my favorite part of this work. And that is a step that the authors call scene fitting. What is that? Essentially, what happens is that the AI re-imagines us as a video game character and sees our movement, but does not have an idea as to what our surroundings look like. What it does is that from this video data, it tries to reconstruct our environment, essentially recreating it as a video game level. And that is quite a challenge. Look, at first, it is not close at all. But, over time, it learns what the first obstacle should look like, but still, the rest of the level, not so much. Can this still be improved? Let's have a look together. As we give it some more time, and our character a few more concussions, it starts to get a better feel of the level. And it really works for a variety of difficult dynamic motion types. Cartwheels, backflips, parkour jumps, dance moves, you name it. It is a robust technique that can do it all. So cool. And note that the authors of the paper gave us not just the blueprints for the technique in the form of a research paper, but they also provide the source code of this technique to all of us, free of charge. Thank you so much. I am sure this will be a huge help in democratizing the creation of video games and all kinds of virtual characters. And if we add up all of these together, we get this. This truly is a sight to behold. Look, so much improvement, just one more paper down the line.
And just imagine what we will be able to do a couple more papers down the line. Well, what do you think? Let me know in the comments below. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me/papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
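The whole pipeline above starts from nothing but a monocular video of the motion. As a rough illustration of that very first step, here is a sketch that pulls per-frame body landmarks out of a video with the off-the-shelf MediaPipe library; this is not the paper's pose estimator, and "input.mp4" is a placeholder path.

```python
# Sketch only: extract per-frame pose landmarks from a monocular video.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("input.mp4")  # placeholder path

trajectory = []  # per-frame (x, y, z) landmarks for a downstream controller
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB while OpenCV delivers BGR
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        trajectory.append([(lm.x, lm.y, lm.z)
                           for lm in result.pose_landmarks.landmark])

cap.release()
pose.close()
print(f"extracted pose for {len(trajectory)} frames")
```

The paper's contribution is everything that happens after this step: turning such a noisy landmark trajectory into physically valid character control while simultaneously fitting the scene geometry the character interacts with.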
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 12.48, "text": " Today, we will try to create a copy of ourselves and place it into a virtual world."}, {"start": 12.48, "end": 19.68, "text": " Now this will be quite a challenge. Normally, to do this, we have to ask an artist to create a"}, {"start": 19.68, "end": 27.6, "text": " digital copy of us which takes a lot of time and effort. But there may be a way out."}, {"start": 27.6, "end": 35.760000000000005, "text": " Look, with this earlier AIB technique, we can take a piece of target geometry and have an algorithm"}, {"start": 35.760000000000005, "end": 43.52, "text": " try to rebuild it to be used within a virtual world. The process is truly a sight to behold."}, {"start": 43.52, "end": 50.8, "text": " Look at how beautifully it sculpts this piece of geometry until it looks like our target shape."}, {"start": 50.8, "end": 60.0, "text": " This is wonderful. But wait a second, if we wish to create a copy of ourselves, we probably want it"}, {"start": 60.0, "end": 68.08, "text": " to move around too. This is, however, a stationary piece of geometry. No movement is allowed here."}, {"start": 68.08, "end": 75.44, "text": " So, what do we do? What about movement? Well, have a look at this new technique,"}, {"start": 75.44, "end": 81.75999999999999, "text": " we're getting a piece of geometry with movement cannot get any simpler than this."}, {"start": 82.32, "end": 87.44, "text": " Just do your thing, record it with a camera and give it to the AI."}, {"start": 88.32, "end": 96.0, "text": " And I have to say, I am a little skeptical. Look, this is what a previous technique could get us."}, {"start": 96.64, "end": 103.2, "text": " This is not too close to what we are looking for. So, let's see what the new method can do with"}, {"start": 103.2, "end": 113.52000000000001, "text": " this data. And, uh oh, this is not great. So, is this it? Is the geometry cloning dream dead?"}, {"start": 114.24000000000001, "end": 121.52000000000001, "text": " Well, don't despair quite yet. This issue happens because our starting position and orientation"}, {"start": 121.52000000000001, "end": 130.24, "text": " is not known to the algorithm, but it can be remedied. How? Well, by adding additional data"}, {"start": 130.24, "end": 137.20000000000002, "text": " to the AI to learn from. And now, hold on to your papers and let's see what it can do now."}, {"start": 137.92000000000002, "end": 144.88, "text": " And, oh my goodness, are you seeing what I am seeing? Our movement is now"}, {"start": 144.88, "end": 151.12, "text": " are replicated in a virtual world almost perfectly. Look at that beautiful animation."}, {"start": 151.12, "end": 162.0, "text": " Absolutely incredible. And, if even this is not good enough, look at this result too. So good."}, {"start": 162.0, "end": 169.12, "text": " Loving it. And believe it or not, it has even more coolness up the sleeve. If you have been"}, {"start": 169.12, "end": 176.32, "text": " holding on to your papers so far, now squeeze that paper because here comes my favorite part of"}, {"start": 176.32, "end": 184.56, "text": " this work. And that is a step that the authors call scene-fitting. What is that? 
Essentially,"}, {"start": 184.56, "end": 191.84, "text": " what happens is that the AI re-imagines us as a video game character and sees our movement,"}, {"start": 191.84, "end": 199.2, "text": " but does not have an idea as to what our surroundings look like. What it does is that from this video"}, {"start": 199.2, "end": 206.95999999999998, "text": " data, it tries to reconstruct our environment, essentially, recreating it as a video game level."}, {"start": 207.6, "end": 215.67999999999998, "text": " And that is quite a challenge. Look, at first, it is not close at all. But, over time,"}, {"start": 215.67999999999998, "end": 222.39999999999998, "text": " it learns what the first obstacle should look like, but still, the rest of the level, not so much."}, {"start": 222.4, "end": 229.76000000000002, "text": " Can this still be improved? Let's have a look together as we give it some more time and our"}, {"start": 229.76000000000002, "end": 237.20000000000002, "text": " character a few more concussions, it starts to get a better feel of the level. And it really works"}, {"start": 237.20000000000002, "end": 245.92000000000002, "text": " for a variety of difficult dynamic motion types. Cartwheel, backflips, parkour jumps, dance moves,"}, {"start": 245.92, "end": 253.92, "text": " you name it. It is a robust technique that can do it all. So cool. And note that the authors"}, {"start": 253.92, "end": 260.15999999999997, "text": " of the paper gave us not just the blueprints for the technique in the form of a research paper,"}, {"start": 260.15999999999997, "end": 267.2, "text": " but they also provide the source code of this technique to all of us free of charge. Thank you"}, {"start": 267.2, "end": 275.2, "text": " so much. I am sure this will be a huge help in democratizing the creation of video games and"}, {"start": 275.2, "end": 281.84, "text": " all kinds of virtual characters. And if we add up all of these together, we get this."}, {"start": 282.64, "end": 290.96, "text": " This truly is a sight to behold. Look, so much improvement, just one more paper down the line."}, {"start": 290.96, "end": 296.4, "text": " And just imagine what we will be able to do a couple more papers down the line."}, {"start": 297.28, "end": 302.71999999999997, "text": " Well, what do you think? Let me know in the comments below. This video has been supported by"}, {"start": 302.72, "end": 308.72, "text": " Wates and Biases. Check out the recent offering, fully connected, a place where they bring"}, {"start": 308.72, "end": 315.68, "text": " machine learning practitioners together to share and discuss their ideas, learn from industry leaders,"}, {"start": 315.68, "end": 322.0, "text": " and even collaborate on projects together. You see, I get messages from you fellow scholars telling"}, {"start": 322.0, "end": 329.68, "text": " me that you have been inspired by the series, but don't really know where to start. And here it is."}, {"start": 329.68, "end": 335.44, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 335.44, "end": 340.8, "text": " get your papers accepted to a conference, and more. Make sure to visit them through"}, {"start": 340.8, "end": 348.48, "text": " wmb.me slash papers or just click the link in the video description. 
Our thanks to Wates and Biases"}, {"start": 348.48, "end": 353.2, "text": " for their longstanding support and for helping us make better videos for you."}, {"start": 353.2, "end": 363.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=aPiHhJjN3hI
DeepMind’s New AI Thinks It Is A Genius! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "DeepMind Gopher - Scaling Language Models: Methods, Analysis & Insights from Training Gopher" is available here: https://arxiv.org/abs/2112.11446 https://deepmind.com/blog/article/language-modelling-at-scale ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see what DeepMind's AI is able to do after being unleashed on the Internet and reading no less than two trillion words. And amusingly, it also thinks that it is a genius. So is it? Well, we are going to find out together today. I am really curious about that, especially given how powerful these recent AI language models are. For instance, OpenAI's GPT-3 language model can now write poems and even continue your stories. And even better, these stories can change direction and the AI can still pick them up and finish them. Recipes work too. So while OpenAI is writing these outstanding papers, I wonder what scientists at DeepMind are up to these days. Well, check this out. They have unleashed their AI, which they call Gopher, on the Internet and asked it to read as much as it can. That is two trillion words. My goodness, that is a ton of text. What did it learn from it? Oh boy, a great deal. But mostly, this one can answer questions. Hmm, questions? There are plenty of AIs around that can answer questions. Some can even solve a math exam straight from MIT. So why is this so interesting? Well, while humans are typically experts at one thing or very few things, this AI is nearly an expert at almost everything. Let's see what it can do together. For instance, we can ask a bunch of questions about biology and it will not only be quite insightful, but it also remembers what we were discussing a few questions ago. That is not trivial at all. So cool. Now note that not all of its answers are completely correct. We will have a more detailed look at that in a moment. Also, what I absolutely loved seeing when reading the paper is that we can even ask what it is thinking. And look, it expresses that it wishes to play on its smartphone. Very human-like. Now, make no mistake. This does not mean that the AI is thinking like a human is thinking. At the risk of oversimplifying it, this is more like a statistical combination of things it had learned that people say on the internet when asked what they are thinking. Now note that many of these new works are so difficult to evaluate because they typically do better on some topics than previous ones and worse on others. The comparison of these techniques can easily become a bit subjective, depending on what we are looking for. However, not here. Hold on to your papers and have a look at this. Oh wow, my goodness. Are you seeing what I am seeing? This is OpenAI's GPT-3 and this is Gopher. As you see, it is a great leap forward, not just here and there, but in many categories at the same time. Also, GPT-3 used 175 billion parameters to train its neural network. Gopher uses 280 billion parameters, and as you see, we get plenty of value for these additional parameters. So, what does all this mean? This means that as these neural networks get bigger and bigger, they are still getting better. We are steadily closing in on human level experts in many areas at the same time, and progress is still not plateauing. It still has more left in the tank. How much more? We don't know yet, but as you see, the pace of improvement in AI research is absolutely incredible. However, we are still not there yet. Its knowledge in the areas of humanities, social sciences and medicine is fantastic, but at mathematics, of all things, not so much. You will see about that in a moment.
And if you have been holding onto your paper so far, now squeeze that paper because would you look at that? What is it that I am seeing here? Oh boy, it thinks that it is a genius. Well, is it? Let's ask some challenging questions about Einstein's field equations, black holes and more, and find out. Hmm, well, it has a few things going for it. For instance, it has a great deal of factual knowledge. However, it can also get quite confused by very simple questions. Do geniuses mess up this multiplication? I should hope not. Also, have a look at this. We noted that it is not much of a math wizard. When asked these questions, it gives us an answer, and when we ask, are you sure about that? It says it is very confident. But, it is confidently incorrect, I am afraid, because none of these answers are correct. So, a genius AI? Well, not quite yet. Human level intelligence? Also not yet. But this is an incredible step forward, just one more paper down the line. And just imagine what we will be able to do just a couple more papers down the line. What do you think? Does this get your mind going? Let me know your ideas in the comments below. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
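Gopher itself is not publicly downloadable, so as a stand-in, here is a sketch of the same kind of next-token question answering with GPT-2, a far smaller causal language model, through the Hugging Face transformers library. The prompt is made up, and the printed parameter count shows how figures like 175 or 280 billion are tallied.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Parameter counts like "280 billion" are simply the number of learned weights:
n_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 parameters: {n_params / 1e6:.0f}M (Gopher: ~280,000M)")

# Made-up prompt in the question-answering style discussed in the episode:
prompt = "Q: What is a black hole?\nA:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                         top_p=0.9, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

A model this small will happily answer and be confidently incorrect, a miniature version of the failure mode shown in the episode; scale improves factual recall dramatically, though as the Gopher paper shows, mathematics still lags behind.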
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifehir."}, {"start": 4.64, "end": 11.52, "text": " Today, we are going to see what DeepMind's AI is able to do after being unleashed on the"}, {"start": 11.52, "end": 17.12, "text": " Internet and reading no less than two trillion words."}, {"start": 17.12, "end": 21.84, "text": " And amusingly, it also thinks that it is a genius."}, {"start": 21.84, "end": 23.56, "text": " So is it?"}, {"start": 23.56, "end": 27.04, "text": " Well, we are going to find out together today."}, {"start": 27.04, "end": 33.4, "text": " I am really curious about that, especially given how powerful these recent AI language"}, {"start": 33.4, "end": 34.8, "text": " models are."}, {"start": 34.8, "end": 44.04, "text": " For instance, open AI's GPT3 language model AI can now write poems and even continue"}, {"start": 44.04, "end": 45.56, "text": " your stories."}, {"start": 45.56, "end": 51.72, "text": " And even better, these stories can change direction and the AI can still pick them up"}, {"start": 51.72, "end": 53.76, "text": " and finish them."}, {"start": 53.76, "end": 56.08, "text": " Recipes work too."}, {"start": 56.08, "end": 62.96, "text": " So while open AI is writing these outstanding papers, I wonder what scientists at DeepMind"}, {"start": 62.96, "end": 64.92, "text": " are up to these days."}, {"start": 64.92, "end": 67.08, "text": " Well, check this out."}, {"start": 67.08, "end": 73.72, "text": " They have unleashed their AI that they call Goffhr on the Internet and asked it to read"}, {"start": 73.72, "end": 75.96, "text": " as much as it can."}, {"start": 75.96, "end": 78.48, "text": " That is two trillion words."}, {"start": 78.48, "end": 82.36, "text": " My goodness, that is a ton of text."}, {"start": 82.36, "end": 83.84, "text": " What did it learn from it?"}, {"start": 83.84, "end": 86.24000000000001, "text": " Oh boy, a great deal."}, {"start": 86.24000000000001, "end": 90.56, "text": " But mostly this one can answer questions."}, {"start": 90.56, "end": 92.48, "text": " Hmm, questions?"}, {"start": 92.48, "end": 97.16, "text": " There are plenty of AI's around that can answer questions."}, {"start": 97.16, "end": 102.24000000000001, "text": " Some can even solve a math exam straight from MIT."}, {"start": 102.24000000000001, "end": 104.84, "text": " So why is this so interesting?"}, {"start": 104.84, "end": 111.60000000000001, "text": " Well, while humans are typically experts at one thing or very few things, this AI is"}, {"start": 111.6, "end": 115.19999999999999, "text": " nearly an expert at almost everything."}, {"start": 115.19999999999999, "end": 117.47999999999999, "text": " Let's see what it can do together."}, {"start": 117.47999999999999, "end": 123.64, "text": " For instance, we can ask a bunch of questions about biology and it will not only be quite"}, {"start": 123.64, "end": 130.16, "text": " insightful, but it also remembers what we were discussing a few questions ago."}, {"start": 130.16, "end": 132.76, "text": " That is not trivial at all."}, {"start": 132.76, "end": 133.76, "text": " So cool."}, {"start": 133.76, "end": 137.68, "text": " Now note that not all of its answers are completely correct."}, {"start": 137.68, "end": 141.16, "text": " We will have a more detailed look at that in a moment."}, {"start": 141.16, "end": 147.88, "text": " Also, what I absolutely loved seeing when reading the paper is that we can even ask what"}, {"start": 147.88, "end": 149.68, 
"text": " it is thinking."}, {"start": 149.68, "end": 155.12, "text": " And look, it expresses that it wishes to play on its smartphone."}, {"start": 155.12, "end": 156.48, "text": " Very human-like."}, {"start": 156.48, "end": 159.04, "text": " Now, make no mistake."}, {"start": 159.04, "end": 164.04, "text": " This does not mean that the AI is thinking like a human is thinking."}, {"start": 164.04, "end": 169.8, "text": " At the risk of simplifying it, this is more like a statistical combination of things"}, {"start": 169.8, "end": 174.8, "text": " that it had learned that people say on the internet when asked what they are thinking."}, {"start": 174.8, "end": 180.36, "text": " Now note that many of these new works are so difficult to evaluate because they typically"}, {"start": 180.36, "end": 186.24, "text": " do better on some topics than previous ones and worse on others."}, {"start": 186.24, "end": 191.8, "text": " The comparison of these techniques can easily become a bit subjective depending on what"}, {"start": 191.8, "end": 193.4, "text": " we are looking for."}, {"start": 193.4, "end": 195.68, "text": " However, not here."}, {"start": 195.68, "end": 199.96, "text": " Pull down to your papers and have a look at this."}, {"start": 199.96, "end": 202.68, "text": " Oh wow, my goodness."}, {"start": 202.68, "end": 205.6, "text": " Are you seeing what I am seeing?"}, {"start": 205.6, "end": 210.64000000000001, "text": " This is OpenAI's GPT3 and this is Goffhr."}, {"start": 210.64000000000001, "end": 216.88, "text": " As you see, it is a great leap forward, not just here and there, but in many categories"}, {"start": 216.88, "end": 218.60000000000002, "text": " at the same time."}, {"start": 218.6, "end": 226.32, "text": " Also, GPT3 used 175 billion parameters to train its neural network."}, {"start": 226.32, "end": 233.68, "text": " Goffhr uses 280 billion parameters and as you see, we get plenty of value for these additional"}, {"start": 233.68, "end": 234.68, "text": " parameters."}, {"start": 234.68, "end": 237.76, "text": " So, what does all this mean?"}, {"start": 237.76, "end": 243.0, "text": " This means that as these neural networks get bigger and bigger, they are still getting"}, {"start": 243.0, "end": 244.0, "text": " better."}, {"start": 244.0, "end": 250.64, "text": " We are steadily closing in on the human level experts in many areas at the same time and"}, {"start": 250.64, "end": 253.88, "text": " progress is still not plateauing."}, {"start": 253.88, "end": 256.76, "text": " It still has more left in the tank."}, {"start": 256.76, "end": 257.92, "text": " How much more?"}, {"start": 257.92, "end": 263.92, "text": " We don't know yet, but as you see, the pace of improvement in AI research is absolutely"}, {"start": 263.92, "end": 264.92, "text": " incredible."}, {"start": 264.92, "end": 268.36, "text": " However, we are still not there yet."}, {"start": 268.36, "end": 274.8, "text": " We have a lot of knowledge in the area of humanities, social sciences and medicine is fantastic,"}, {"start": 274.8, "end": 278.96000000000004, "text": " but at mathematics of all things, not so much."}, {"start": 278.96000000000004, "end": 281.48, "text": " You will see about that in a moment."}, {"start": 281.48, "end": 288.04, "text": " And if you have been holding onto your paper so far, now squeeze that paper because would"}, {"start": 288.04, "end": 289.72, "text": " you look at that?"}, {"start": 289.72, "end": 291.72, "text": " What is it that I am seeing here?"}, {"start": 
291.72, "end": 295.64, "text": " Oh boy, it thinks that it is a genius."}, {"start": 295.64, "end": 297.68, "text": " Well, is it?"}, {"start": 297.68, "end": 302.72, "text": " Let's ask some challenging questions about Einstein's field equations, black holes"}, {"start": 302.72, "end": 305.96, "text": " and more, and find out."}, {"start": 305.96, "end": 309.84000000000003, "text": " Hmm, well, it has a few things going for it."}, {"start": 309.84000000000003, "end": 313.48, "text": " For instance, it has a great deal of factual knowledge."}, {"start": 313.48, "end": 319.0, "text": " However, it can also get quite confused by very simple questions."}, {"start": 319.0, "end": 322.44, "text": " Do geniuses mess up this multiplication?"}, {"start": 322.44, "end": 323.96000000000004, "text": " I should hope not."}, {"start": 323.96000000000004, "end": 326.28000000000003, "text": " Also, have a look at this."}, {"start": 326.28, "end": 330.0, "text": " We noted that it is not much of a math wizard."}, {"start": 330.0, "end": 335.59999999999997, "text": " When asked these questions, it gives us an answer, and when we ask, are you sure about"}, {"start": 335.59999999999997, "end": 336.59999999999997, "text": " that?"}, {"start": 336.59999999999997, "end": 339.03999999999996, "text": " It says it is very confident."}, {"start": 339.03999999999996, "end": 346.2, "text": " But, it is confidently incorrect, I am afraid, because none of these answers are correct."}, {"start": 346.2, "end": 351.11999999999995, "text": " So, a genius AI, well, not quite yet."}, {"start": 351.11999999999995, "end": 354.67999999999995, "text": " Human level intelligence, also not yet."}, {"start": 354.68, "end": 360.36, "text": " But this is an incredible step forward, just one more paper down the line."}, {"start": 360.36, "end": 366.48, "text": " And just imagine what we will be able to do just a couple more papers down the line."}, {"start": 366.48, "end": 367.48, "text": " What do you think?"}, {"start": 367.48, "end": 369.28000000000003, "text": " Does this get your mind going?"}, {"start": 369.28000000000003, "end": 371.92, "text": " Let me know your ideas in the comments below."}, {"start": 371.92, "end": 375.92, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 375.92, "end": 383.08, "text": " If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 383.08, "end": 391.8, "text": " Like this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory."}, {"start": 391.8, "end": 399.24, "text": " And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 399.24, "end": 400.56, "text": " Azure."}, {"start": 400.56, "end": 408.24, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 408.24, "end": 411.0, "text": " workstations or servers."}, {"start": 411.0, "end": 418.36, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 418.36, "end": 419.36, "text": " today."}, {"start": 419.36, "end": 425.0, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 425.0, "end": 426.0, "text": " for you."}, {"start": 426.0, "end": 455.56, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5j8I7V6blqM
NVIDIA’s New AI Grows Objects Out Of Nothing! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/3d-inverse-rendering 📝 The paper "Extracting Triangular 3D Models, Materials, and Lighting From Images" is available here: https://research.nvidia.com/publication/2021-11_Extracting-Triangular-3D https://nvlabs.github.io/nvdiffrec/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how NVIDIA's new AI transfers real objects into a virtual world. So, what is going on here? Simple: in goes just one image or a set of images of an object, and the result is that an AI really transfers this real object into a virtual world almost immediately. Now that sounds like science fiction. How is that even possible? Well, with this earlier work, it was possible to take a target geometry from somewhere and obtain a digital version of it by growing it out of nothing. This work reconstructed its geometry really well. But geometry only. This other work tried to reconstruct not just the geometry, but everything. For instance, the material models too. Now, incredible as this work is, these are still baby steps in this area. As you see, both the geometry and the materials are still quite coarse. So, is that it? Is the dream of transferring real objects into virtual worlds dead? It seems so. Why? Because we either have to throw out the materials to get a really high-quality result, or, if we wish to get everything, we have to be okay with a coarse result. But, I wonder, can this be improved somehow? Well, let's find out together. And here it is. NVIDIA's new work tries to take the best of both worlds. What does that mean? Well, they promise to reconstruct absolutely everything: geometry, materials, and even the lighting setup, and all of this with high fidelity. Well, that sounds absolutely amazing, but I will believe it when I see it. Let's see together. Well, that's not quite what we are looking for, is it? This isn't great, but this is just the start. Now, hold on to your papers and marvel at how the AI improves this result over time. Oh yes, this is getting better and... My goodness! After as little as two minutes, we already have a usable model. That is so cool. I love it. We go on a quick bathroom break and the AI does all the hard work for us. Absolutely amazing. And it gets even better. Well, if we are okay with not a quick bathroom break, but with taking a nap, we get this, just an hour later. And if that is even possible, it gets even better than that. How is it possible? Well, imagine that we have a bunch of photos of a historical artifact, and you know what's coming. Of course, creating a virtual version of it and dropping it into a physics simulation engine, where we can even edit its material or embed it into a cloth simulation. How cool is that? And I can't believe it, it still doesn't stop there. We can even change the lighting around it and see what it would look like in all its glory. That is absolutely beautiful. Loving it. And if we have a hot dog somewhere and we have already created a virtual version of it, now, what do we do with it? Of course, we engage in the favorite pastime of the computer graphics researcher, that is, throwing jelly boxes at it. And with this new technique, you can do that too. And even better, we can take an already existing solid object and reimagine it as if it were made of jelly. No problem at all. And you know what? Time for one final pastime. Let's not just reconstruct an object, why not throw an entire scene at the AI? See if it buckles. Can it deal with that? Let's see. And I cannot believe what I am seeing here. It resembles the original reference scene so well, even when animated, that it is almost impossible to find any differences. Have you found any? I have to say I doubt that, because I have swapped the labels.
Oh yes, this is not the reconstruction; this is the real scene, and that one is the reconstruction. This will be an absolutely incredible tool in democratizing the creation of virtual worlds and putting it into the hands of everyone. Bravo, NVIDIA. So, what do you think? Does this get your mind going? What else could this be useful for? What do we expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
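The improvement over time described in the transcript above comes from an optimization loop: start from a rough scene, render it, compare the render against the reference photos, and nudge the geometry, materials, and lighting to reduce the difference. Here is a toy Python sketch of that inverse-rendering idea; the render function is a trivial stand-in for the paper's differentiable renderer over triangle meshes, so only the shape of the loop carries over.

import numpy as np

rng = np.random.default_rng(0)
reference = rng.random((8, 8))        # stand-in for a reference photo

def render(params: np.ndarray) -> np.ndarray:
    # Toy "renderer": parameters map directly to pixel values. The real
    # system rasterizes a triangle mesh with materials and lighting.
    return params

params = np.zeros_like(reference)     # start from an empty scene
learning_rate = 0.5
for step in range(201):
    image = render(params)
    loss = np.mean((image - reference) ** 2)
    grad = 2.0 * (image - reference) / image.size   # analytic gradient of the MSE
    params -= learning_rate * grad                  # gradient-descent update
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.6f}")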
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.76, "end": 13.5, "text": " Today we are going to see how Nvidia's new AI transfers real objects into a virtual world."}, {"start": 13.5, "end": 16.18, "text": " So, what is going on here?"}, {"start": 16.18, "end": 22.14, "text": " Simple, in-goals just one image or a set of images of an object."}, {"start": 22.14, "end": 31.740000000000002, "text": " And the result is that an AI really transfers this real object into a virtual world almost immediately."}, {"start": 31.740000000000002, "end": 34.14, "text": " Now that sounds like science fiction."}, {"start": 34.14, "end": 36.24, "text": " How is that even possible?"}, {"start": 36.24, "end": 48.94, "text": " Well, with this earlier work, it was possible to take a target geometry from somewhere and obtain a digital version of it by growing it out of nothing."}, {"start": 48.94, "end": 52.94, "text": " This work reconstructed is geometry really well."}, {"start": 52.94, "end": 55.44, "text": " But geometry only."}, {"start": 55.44, "end": 62.239999999999995, "text": " This other work tried to reconstruct not just the geometry, but everything."}, {"start": 62.239999999999995, "end": 65.14, "text": " For instance, the material models too."}, {"start": 65.14, "end": 71.24, "text": " Now, incredible as this work is, it is still baby steps in this area."}, {"start": 71.24, "end": 77.53999999999999, "text": " As you see, both the geometry and the materials are still quite coarse."}, {"start": 77.54, "end": 79.54, "text": " So, is that it done?"}, {"start": 79.54, "end": 84.54, "text": " Is the transferring real objects into virtual worlds dream dead?"}, {"start": 84.54, "end": 85.94000000000001, "text": " It seems so."}, {"start": 85.94000000000001, "end": 86.94000000000001, "text": " Why?"}, {"start": 86.94000000000001, "end": 92.84, "text": " Because we either have to throw out the materials to get a really high quality result,"}, {"start": 92.84, "end": 98.64000000000001, "text": " or if we wish to get everything, we have to be okay with a coarse result."}, {"start": 98.64000000000001, "end": 103.24000000000001, "text": " But, I wonder, can this be improved somehow?"}, {"start": 103.24000000000001, "end": 105.94000000000001, "text": " Well, let's find out together."}, {"start": 105.94, "end": 107.64, "text": " And here it is."}, {"start": 107.64, "end": 112.14, "text": " And video's new work tries to take the best of both worlds."}, {"start": 112.14, "end": 113.64, "text": " What does that mean?"}, {"start": 113.64, "end": 118.24, "text": " Well, they promise to reconstruct absolutely everything."}, {"start": 118.24, "end": 125.74, "text": " Geometry, materials, and even the lighting setup and all of this with high fidelity."}, {"start": 125.74, "end": 132.14, "text": " Well, that sounds absolutely amazing, but I will believe it when I see it."}, {"start": 132.14, "end": 134.24, "text": " Let's see together."}, {"start": 134.24, "end": 138.14000000000001, "text": " Well, that's not quite what we are looking for, is it?"}, {"start": 138.14000000000001, "end": 142.24, "text": " This isn't great, but this is just the start."}, {"start": 142.24, "end": 150.44, "text": " Now, hold on to your papers and marvel at how the AI improves this result over time."}, {"start": 150.44, "end": 154.14000000000001, "text": " Oh yes, this is getting better and..."}, {"start": 154.14000000000001, "end": 155.84, "text": " My goodness!"}, 
{"start": 155.84, "end": 162.04000000000002, "text": " After just as little as two minutes, we already have a usable model."}, {"start": 162.04000000000002, "end": 163.64000000000001, "text": " That is so cool."}, {"start": 163.64, "end": 165.33999999999997, "text": " I love it."}, {"start": 165.33999999999997, "end": 170.83999999999997, "text": " We go on a quick bathroom break and the AI does all the hard work for us."}, {"start": 170.83999999999997, "end": 173.14, "text": " Absolutely amazing."}, {"start": 173.14, "end": 175.14, "text": " And it gets even better."}, {"start": 175.14, "end": 183.44, "text": " Well, if we are okay with not a quick bathroom break, but with taking a nap, we get this."}, {"start": 183.44, "end": 185.33999999999997, "text": " Just an hour later."}, {"start": 185.33999999999997, "end": 189.54, "text": " And if that is at all possible, it gets even better than that."}, {"start": 189.54, "end": 191.04, "text": " How is it possible?"}, {"start": 191.04, "end": 198.23999999999998, "text": " Well, imagine that we have a bunch of photos of a historical artifact and you know what's coming."}, {"start": 198.23999999999998, "end": 205.73999999999998, "text": " Of course, creating a virtual version of it and dropping it into a physics simulation engine"}, {"start": 205.73999999999998, "end": 212.94, "text": " where we can even edit this material or embed it into a class simulation."}, {"start": 212.94, "end": 215.23999999999998, "text": " How cool is that?"}, {"start": 215.23999999999998, "end": 217.84, "text": " And I can't believe it."}, {"start": 217.84, "end": 220.04, "text": " It still doesn't stop there."}, {"start": 220.04, "end": 226.94, "text": " We can even change the lighting around it and see what it would look like in all its glory."}, {"start": 226.94, "end": 229.94, "text": " That is absolutely beautiful."}, {"start": 229.94, "end": 231.14, "text": " Loving it."}, {"start": 231.14, "end": 237.34, "text": " And if we have a hot dog somewhere and we already created a virtual version of it,"}, {"start": 237.34, "end": 240.14, "text": " but now, what do we do with it?"}, {"start": 240.14, "end": 245.54, "text": " Of course, we engage in the favorite pastime of the computer graphics researcher"}, {"start": 245.54, "end": 248.94, "text": " that is throwing jelly boxes at it."}, {"start": 248.94, "end": 251.94, "text": " And with this new technique, you can do that too."}, {"start": 251.94, "end": 261.94, "text": " And even better, we can take an already existing solid object and reimagine it as if it were made of jelly."}, {"start": 261.94, "end": 263.54, "text": " No problem at all."}, {"start": 263.54, "end": 265.44, "text": " And you know what?"}, {"start": 265.44, "end": 267.74, "text": " It is final pastime."}, {"start": 267.74, "end": 274.74, "text": " Let's not just reconstruct an object, why not throw an entire scene at the AI?"}, {"start": 274.74, "end": 276.34, "text": " See if it buckles."}, {"start": 276.34, "end": 279.34, "text": " Can it deal with that? 
Let's see."}, {"start": 279.34, "end": 284.64, "text": " And I cannot believe what I am seeing here."}, {"start": 284.64, "end": 291.44, "text": " It resembles the original reference scene so well even when animated that it is almost"}, {"start": 291.44, "end": 294.73999999999995, "text": " impossible to find any differences."}, {"start": 294.73999999999995, "end": 296.14, "text": " Have you found any?"}, {"start": 296.14, "end": 300.94, "text": " I have to say I doubt that because I have swapped the labels."}, {"start": 300.94, "end": 304.94, "text": " Oh yes, this is not the reconstruction."}, {"start": 304.94, "end": 306.94, "text": " This is the real reconstruction."}, {"start": 306.94, "end": 315.94, "text": " This will be an absolutely incredible tool in democratizing creating virtual worlds and giving it into the hands of everyone."}, {"start": 315.94, "end": 317.94, "text": " Bravo Nvidia."}, {"start": 317.94, "end": 319.94, "text": " So, what do you think?"}, {"start": 319.94, "end": 321.94, "text": " Does this get your mind going?"}, {"start": 321.94, "end": 323.94, "text": " What else could this be useful for?"}, {"start": 323.94, "end": 325.94, "text": " What do we expect to happen?"}, {"start": 325.94, "end": 327.94, "text": " A couple more papers down the line?"}, {"start": 327.94, "end": 329.94, "text": " Please let me know in the comments below."}, {"start": 329.94, "end": 331.94, "text": " I'd love to hear your thoughts."}, {"start": 331.94, "end": 337.94, "text": " What you see here is a report of this exact paper we have talked about which was made by weights and biases."}, {"start": 337.94, "end": 339.94, "text": " I put a link to it in the description."}, {"start": 339.94, "end": 341.94, "text": " Make sure to have a look."}, {"start": 341.94, "end": 344.94, "text": " I think it helps you understand this paper better."}, {"start": 344.94, "end": 349.94, "text": " Weight and biases provide tools to track your experiments in your deep learning projects."}, {"start": 349.94, "end": 356.94, "text": " Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better."}, {"start": 356.94, "end": 363.94, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more."}, {"start": 363.94, "end": 371.94, "text": " And the best part is that weights and biases is free for all individuals, academics and open source projects."}, {"start": 371.94, "end": 381.94, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 381.94, "end": 388.94, "text": " Our thanks to weights and biases for their long standing support and for helping us make better videos for you."}, {"start": 388.94, "end": 417.94, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=CFT-2soU508
DeepMind’s New AI Learns Gaming From Humans! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Learning Robust Real-Time Cultural Transmission without Human Data" is available here: https://www.deepmind.com/research/publications/2022/Learning-Robust-Real-Time-Cultural-Transmission-without-Human-Data https://sites.google.com/view/dm-cgi ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1430105/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we will see how and what DeepMind's AI is able to learn from a human, but with a twist. And the twist is that we are going to remove the human teacher from the game and see what happens. Is it just memorizing what the human does, or is this AI capable of independent thought? Their previous technique did something like that, like guiding us to the bathroom, starting a band together, or even finding out the limitations of the physics within this virtual world. However, there is a big difference. This previous technique had time to study. This new one, not so much. This new one has to do the learning on the job. Oh yes, in this project we will see an AI that has no pre-collected human data. It really has to learn everything on the job. So, can it? Well, let's see together. Phase 1: pandemonium. Here the AI says, well, here we do things. I am not so sure. Occasionally it gets a point. But now, look. Uh-oh, the red teacher is gone. And oh boy, it gets very confused. It does not have much of an idea what is going on. But a bit of learning happens, and later, phase 2: following. It is still not sure what is going on, but when it follows this red chap, it realizes that it is getting a much higher score. Earlier, it only got 2. And now look at it. It is learning something new here. And look, he is gone again, but it knows what to do. Well, kind of. It still probably wonders why its score is decreasing. So, a bit later, phase 3: memorization. First, the usual dance with the demonstrator, but now, when he is gone, it knows exactly what to do and just keeps improving its score. And then comes the crown jewel. Phase 4: independence. No demonstrator anywhere to be seen. This is the final exam. It has to be able to independently solve the problem. But here comes the twist. For the previous phase, we said memorization, and for this one we say independence. Why is this different? Well, look, do you see the difference? Hold on to your papers, because we have switched up the colors. So the previous strategy is suddenly useless. Oh yes, if it walks the same path as before, it will not get a good score, and initially that's exactly what it is trying to do. But over time, it is able to learn independently and indeed find the correct order by itself. And what I absolutely loved here is, look, over time the charts verify that indeed, as soon as we take away the teacher, it starts using different neurons, right after becoming an independent entity. I love it. What an incredible chart. And all this is excellent news. So, if it really has an intelligence of sorts, it has to be able to deal with previously unseen conditions and problems. That sounds super fun. Let's explore that some more. For instance, let's give it a horizontal obstacle. Good. Not a problem. Vertical? That's also fine. Now let's make the world larger, and it is still doing well. Awesome. So, I absolutely love this paper. DeepMind demonstrated that they can build an AI that learns on the job, one that is even capable of independent thought, and, even better, one that can deal with unforeseen situations too. What a time to be alive. So, what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud.
And get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
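A tiny Python sketch of the scoring rule implied by the transcript above: the agent earns points for entering goals in a hidden correct order and is penalized otherwise, which is exactly why a path memorized from the teacher collapses once the colors are reshuffled. The goal names and reward values here are illustrative assumptions, not taken from the paper.

import random

def score_path(path: list[str], correct_order: list[str]) -> int:
    # +1 for each goal entered in the expected sequence, -1 otherwise.
    expected = iter(correct_order)
    target = next(expected, None)
    score = 0
    for goal in path:
        if goal == target:
            score += 1
            target = next(expected, None)
        else:
            score -= 1
    return score

goals = ["red", "green", "blue", "yellow"]
memorized = list(goals)               # the path copied from the demonstrator
print("original order:", score_path(memorized, goals))                    # full score
print("shuffled order:", score_path(memorized, random.sample(goals, 4)))  # usually far worse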
[{"start": 0.0, "end": 5.94, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir, today we will"}, {"start": 5.94, "end": 15.06, "text": " see how and what DeepMind's AI is able to learn from a human, but with a twist."}, {"start": 15.06, "end": 20.96, "text": " And the twist is that we are going to remove the human teacher from the game and see what"}, {"start": 20.96, "end": 22.16, "text": " happens."}, {"start": 22.16, "end": 29.64, "text": " Is it just memorizing what the human does or is this AI capable of independent thought?"}, {"start": 29.64, "end": 36.26, "text": " Their previous technique did something like that, like guiding us to the bathroom, start"}, {"start": 36.26, "end": 44.4, "text": " a band together, or even find out the limitations of the physics within this virtual world."}, {"start": 44.4, "end": 47.8, "text": " However, there is a big difference."}, {"start": 47.8, "end": 51.400000000000006, "text": " This previous technique had time to study."}, {"start": 51.400000000000006, "end": 54.08, "text": " This new one not so much."}, {"start": 54.08, "end": 57.120000000000005, "text": " This new one has to do the learning on the job."}, {"start": 57.12, "end": 64.44, "text": " Oh yes, in this project we will see an AI that has no pre-collected human data."}, {"start": 64.44, "end": 68.03999999999999, "text": " It really has to learn everything on the job."}, {"start": 68.03999999999999, "end": 69.88, "text": " So can it?"}, {"start": 69.88, "end": 72.08, "text": " Well, let's see together."}, {"start": 72.08, "end": 74.4, "text": " Phase 1 Pandemonium."}, {"start": 74.4, "end": 78.67999999999999, "text": " Here the AI says, well, here we do things."}, {"start": 78.67999999999999, "end": 80.72, "text": " I am not so sure."}, {"start": 80.72, "end": 83.84, "text": " Occasionally it gets a point."}, {"start": 83.84, "end": 85.84, "text": " But now, look."}, {"start": 85.84, "end": 90.4, "text": " Uh-oh, the red teacher is gone."}, {"start": 90.4, "end": 94.60000000000001, "text": " And oh boy, it gets very confused."}, {"start": 94.60000000000001, "end": 98.84, "text": " It does not have much of an idea what is going on."}, {"start": 98.84, "end": 105.12, "text": " But a bit of learning happens and later, phase 2 following."}, {"start": 105.12, "end": 111.16, "text": " It is still not sure what is going on, but when it follows this red chap, it realized"}, {"start": 111.16, "end": 114.56, "text": " that it is getting a much higher score."}, {"start": 114.56, "end": 117.08, "text": " It only got 2."}, {"start": 117.08, "end": 118.84, "text": " And now look at it."}, {"start": 118.84, "end": 121.60000000000001, "text": " It is learning something new here."}, {"start": 121.60000000000001, "end": 126.88, "text": " And look, he is gone again, but it knows what to do."}, {"start": 126.88, "end": 129.4, "text": " Well, kind of."}, {"start": 129.4, "end": 133.96, "text": " It still probably wonders why its score is decreasing."}, {"start": 133.96, "end": 138.96, "text": " So a bit later, phase 3 memorization."}, {"start": 138.96, "end": 145.12, "text": " But the usual dance with the demonstrator, but now when he is gone, it knows exactly what"}, {"start": 145.12, "end": 149.28, "text": " to do and just keeps improving its score."}, {"start": 149.28, "end": 152.4, "text": " And then comes the crown jewel."}, {"start": 152.4, "end": 154.84, "text": " Phase 4 Independence."}, {"start": 154.84, "end": 158.08, "text": " No demonstrator anywhere to be 
seen."}, {"start": 158.08, "end": 160.28, "text": " This is the final exam."}, {"start": 160.28, "end": 164.44, "text": " It has to be able to independently solve the problem."}, {"start": 164.44, "end": 166.64000000000001, "text": " But here comes the twist."}, {"start": 166.64, "end": 173.55999999999997, "text": " For the previous phase, we said memorization and for this we say independence."}, {"start": 173.55999999999997, "end": 175.2, "text": " Why is this different?"}, {"start": 175.2, "end": 178.79999999999998, "text": " Well, look, do you see the difference?"}, {"start": 178.79999999999998, "end": 183.39999999999998, "text": " Hold on to your papers because we have switched up the colors."}, {"start": 183.39999999999998, "end": 187.64, "text": " So the previous strategy is suddenly useless."}, {"start": 187.64, "end": 195.32, "text": " Oh yes, if it walks the same path as before, it will not get a good score and initially"}, {"start": 195.32, "end": 198.4, "text": " that's exactly what is trying to do."}, {"start": 198.4, "end": 208.56, "text": " But over time, it is now able to learn independently and indeed find the correct order by itself."}, {"start": 208.56, "end": 216.28, "text": " And what I absolutely loved here is, look, over time the charts verify that indeed, as"}, {"start": 216.28, "end": 222.88, "text": " soon as we take away the teacher, it starts using different neurons right after becoming"}, {"start": 222.88, "end": 225.16, "text": " an independent entity."}, {"start": 225.16, "end": 226.72, "text": " I love it."}, {"start": 226.72, "end": 228.72, "text": " What an incredible chart."}, {"start": 228.72, "end": 231.28, "text": " And all this is excellent news."}, {"start": 231.28, "end": 238.12, "text": " So if it really has an intelligence of sorts, it has to be able to deal with previously"}, {"start": 238.12, "end": 241.6, "text": " unseen conditions and problems."}, {"start": 241.6, "end": 243.64, "text": " That sounds super fun."}, {"start": 243.64, "end": 245.51999999999998, "text": " Let's explore that some more."}, {"start": 245.51999999999998, "end": 249.16, "text": " For instance, let's give it a horizontal obstacle."}, {"start": 249.16, "end": 251.16, "text": " Good."}, {"start": 251.16, "end": 252.88, "text": " Not a problem."}, {"start": 252.88, "end": 255.07999999999998, "text": " Vertical."}, {"start": 255.08, "end": 257.0, "text": " That's also fine."}, {"start": 257.0, "end": 262.52000000000004, "text": " Now let's make the world larger and it is still doing well."}, {"start": 262.52000000000004, "end": 263.52000000000004, "text": " Awesome."}, {"start": 263.52000000000004, "end": 267.08000000000004, "text": " So, I absolutely love this paper."}, {"start": 267.08000000000004, "end": 273.44, "text": " Be it mine demonstrated that they can build an AI that learns on the job, one that is"}, {"start": 273.44, "end": 281.16, "text": " even capable of independent thought and even better one that can deal with unforeseen situations"}, {"start": 281.16, "end": 282.16, "text": " too."}, {"start": 282.16, "end": 284.08000000000004, "text": " What a time to be alive."}, {"start": 284.08, "end": 285.96, "text": " So what do you think?"}, {"start": 285.96, "end": 288.32, "text": " What else could this be useful for?"}, {"start": 288.32, "end": 289.64, "text": " What do you expect to happen?"}, {"start": 289.64, "end": 291.91999999999996, "text": " A couple more papers down the line."}, {"start": 291.91999999999996, "end": 294.2, "text": " Please let me know in 
the comments below."}, {"start": 294.2, "end": 296.32, "text": " I'd love to hear your thoughts."}, {"start": 296.32, "end": 300.28, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 300.28, "end": 307.28, "text": " If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 307.28, "end": 316.15999999999997, "text": " And get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory."}, {"start": 316.15999999999997, "end": 323.59999999999997, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 323.59999999999997, "end": 325.0, "text": " Azure."}, {"start": 325.0, "end": 332.59999999999997, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 332.59999999999997, "end": 335.4, "text": " workstations or servers."}, {"start": 335.4, "end": 342.76, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 342.76, "end": 343.76, "text": " today."}, {"start": 343.76, "end": 349.4, "text": " Our thanks to Lambda for their longstanding support and for helping us make better videos"}, {"start": 349.4, "end": 350.4, "text": " for you."}, {"start": 350.4, "end": 377.15999999999997, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=L9kA8nSJdYw
DeepMind's New AI: A Spark Of Intelligence! 👌
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning" is available here: https://deepmind.com/research/publications/2021/Creating-Interactive-Agents-with-Imitation-Learning ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepMind
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see what DeepMind's amazing AI can learn from humans doing silly things like playing the drums with a comb. Really, we have to do that. Well, okay, if you insist, there you go. Well, if it has seen enough of these instruction-video pairs, hopefully it will be able to learn from them. And here comes the fundamental question we are here to answer today. If this AI does something, is this imitation, or is this learning? Imitation is not that impressive, but if it can learn general concepts from examples like this, now that would perhaps be a little spark of intelligence. That is what we are looking for. So, how much did it learn? Well, let's see it together. Nice! Look at that! If we ask it, it can now show us where a specific room is. And if we get a little lost, it even helps us with additional instructions to make sure we find our way into the bathroom. Now note that the layout of this virtual playhouse is randomized each time we start a new interaction, therefore it cannot just imitate the direction people go when they say bathroom. That doesn't work, because next time it will be elsewhere. It actually has to understand what a bathroom is. Good job, little AI. It also understands that we are holding a bucket and that it should put the grapes in that. Okay, so it learned from instructions and it can deal with instructions. But, if it has learned general concepts, it should be able to do other things too. You know what? Let's ask some questions. First, how many grapes are there in this scene? Two, yes, that is correct. Good job! Now, are they the same color or different? It says different. I like it. Because to answer this question, it has to remember what we just talked about. Very good. And now, hold on to your papers and check this out. We ask what color the grapes are on the floor. And what does it do? It understands what it needs to look for and gathers the required information. Yes, it uses its little peepers and answers the question. A little virtual assistant AI. I love it. But it isn't just an assistant. Here come two of my favorite tasks from the paper. One, if we dreamed up a catchy tune, we just need a bandmate to make it happen. Well, say no more. This little AI is right there to help. So cool. And it gets better. Two, get this: it even knows about its limitations. For instance, it knows that it can't flip the green box over. This can only arise from knowledge of, and experimentation with, the physics of this virtual world. This will be fantastic for testing video games. So, yes, the learning is indeed thorough. Now, let's answer two more super important questions. One, how quickly does the AI learn? And two, is it as good as a human? So, learning speed. Let's look under the hood together and... Oh, yes, yes, yes, I am super happy. You are asking: Károly, why are you super happy? I am super happy because learning is already happening by watching humans mingle for about 12 minutes. That is next to nothing. Amazing. And one more thing that makes me perhaps even happier, and that is that I don't see a sudden spike in the growth; I see nice linear growth instead. Why is that important? Well, this means that there is a higher chance that the algorithm is slowly learning general concepts and starts understanding new things that we might ask it.
And if it were a sudden spike, there would be a higher chance that it had seen something similar to a human clearing a table in the training set and suddenly started copying it. But with this kind of growth, there is less of a chance that it is just copying the teacher. I love it. And yes, this is the corner of the internet where we get unreasonably excited by a blue line. Welcome to Two Minute Papers. Subscribe and hit the bell icon if you wish to see more amazing works like this. And if we look at the task of lifting a drum, oh yes. Such nice linear growth, although at a slower pace than the previous task. Now let's look at how well it does its job. This is Team Human, and MIA is Team AI. And it has an over 70% success rate. And remember, many of these tasks require not imitation, but learning. The AI sees some humans mingling with these scenes, but to be able to answer the questions, it needs to think beyond what it has seen. Once again, that might perhaps be a spark of intelligence. What a time to be alive. But actually, one more question. A virtual assistant is pretty cool, but why is this so important? What does this have to do with our lives? Well, have a look at how OpenAI trained a robot hand in a simulation to be able to rotate these Rubik's cubes. And then they deployed the software onto a real robot hand, and look. It can use the simulation knowledge, and now it works in the real world too. But sim-to-real is relevant to self-driving cars too. Look, Tesla is already working on creating virtual worlds and training their cars there. One of the advantages of that is that we can create really unlikely and potentially unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely. And when they deploy them into the real world, they will have all this knowledge. Waymo is running similar experiments too. And I also wanted to thank you for watching this video. I truly love talking about these amazing research papers and I am really honored to have so many of you Fellow Scholars who are here every episode enjoying these incredible works with me. Thank you so much. So I think this one also has great potential for a sim-to-real situation: learning to help humans in a virtual world and perhaps uploading the AI to a real robot assistant. So, what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
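Two of the measurements discussed in the transcript above, sketched in Python with made-up numbers: the overall task success rate (the "over 70%" figure) and a quick check that a learning curve grows roughly linearly rather than in one sudden jump, since small residuals around a fitted line are what that nice linear growth looks like numerically.

import numpy as np

# Success rate over a batch of evaluation tasks (1 = solved). Made-up data.
outcomes = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
print(f"success rate: {outcomes.mean():.0%}")

# Fit a line to a (synthetic) learning curve measured over training hours.
hours = np.arange(10, dtype=float)
curve = 0.07 * hours + np.random.default_rng(1).normal(0.0, 0.01, size=10)
slope, intercept = np.polyfit(hours, curve, deg=1)
residuals = curve - (slope * hours + intercept)
print(f"slope per hour: {slope:.3f}")
print(f"max deviation from the fitted line: {np.abs(residuals).max():.3f}")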
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kato Jona Ifehir."}, {"start": 4.8, "end": 12.56, "text": " Today we are going to see what Deep Mind's amazing AI can learn from humans doing silly things"}, {"start": 12.56, "end": 15.76, "text": " like playing the drums with a comb."}, {"start": 15.76, "end": 18.56, "text": " Really, we have to do that."}, {"start": 18.56, "end": 22.400000000000002, "text": " Well, okay, if you insist, there you go."}, {"start": 22.400000000000002, "end": 27.2, "text": " Well, if it had seen enough of these instruction video pairs,"}, {"start": 27.2, "end": 30.64, "text": " hopefully it will be able to learn from it."}, {"start": 30.64, "end": 35.28, "text": " And here comes the fundamental question we are here to answer today."}, {"start": 35.28, "end": 41.04, "text": " If this AI does something, is this imitation, or is this learning?"}, {"start": 41.76, "end": 49.120000000000005, "text": " Imitation is not that impressive, but if it can learn general concepts from examples like this,"}, {"start": 49.120000000000005, "end": 53.6, "text": " now that would perhaps be a little spark of intelligence."}, {"start": 53.6, "end": 55.28, "text": " That is what we are looking for."}, {"start": 55.28, "end": 57.84, "text": " So, how much did it learn?"}, {"start": 57.84, "end": 60.0, "text": " Well, let's see it together."}, {"start": 60.0, "end": 61.52, "text": " Nice!"}, {"start": 61.52, "end": 63.120000000000005, "text": " Look at that!"}, {"start": 63.120000000000005, "end": 68.08, "text": " If we ask it, it can now show us where a specific room is."}, {"start": 68.08, "end": 74.08, "text": " And if we get a little lost, it even helps us with additional instructions"}, {"start": 74.08, "end": 78.08, "text": " to make sure we find our way into the bathroom."}, {"start": 78.08, "end": 83.76, "text": " Now note that the layout of this virtual playhouse is randomized each time"}, {"start": 83.76, "end": 90.48, "text": " we start a new interaction, therefore it cannot just imitate the direction people go"}, {"start": 90.48, "end": 92.0, "text": " when they say bathroom."}, {"start": 92.48, "end": 96.32000000000001, "text": " It doesn't work because next time it will be elsewhere."}, {"start": 97.04, "end": 101.04, "text": " It actually has to understand what a bathroom is."}, {"start": 101.76, "end": 103.52000000000001, "text": " Good job, little AI."}, {"start": 103.52000000000001, "end": 106.80000000000001, "text": " It also understands that we are holding a bucket"}, {"start": 106.80000000000001, "end": 109.28, "text": " and it should put the grapes in that."}, {"start": 109.28, "end": 115.68, "text": " Okay, so it learned from instructions and it can deal with instructions."}, {"start": 115.68, "end": 121.92, "text": " But, if it had learned general concepts, it should be able to do other things too."}, {"start": 121.92, "end": 123.44, "text": " You know what?"}, {"start": 123.44, "end": 125.04, "text": " Let's ask some questions."}, {"start": 125.92, "end": 128.72, "text": " First, how many grapes are there in this scene?"}, {"start": 129.84, "end": 132.08, "text": " Two, yes, that is correct."}, {"start": 132.08, "end": 133.12, "text": " Good job!"}, {"start": 134.08, "end": 137.2, "text": " Now, are they the same color or different?"}, {"start": 137.2, "end": 139.11999999999998, "text": " It says different."}, {"start": 139.11999999999998, "end": 140.72, "text": " I like it."}, {"start": 140.72, "end": 145.76, 
"text": " Because to answer this question, it has to remember what we just talked about."}, {"start": 145.76, "end": 147.11999999999998, "text": " Very good."}, {"start": 147.11999999999998, "end": 151.44, "text": " And now, hold on to your papers and check this out."}, {"start": 151.44, "end": 155.67999999999998, "text": " We ask what color the grapes are on the floor."}, {"start": 155.67999999999998, "end": 157.76, "text": " And what does it do?"}, {"start": 157.76, "end": 163.92, "text": " It understands what it needs to look for together the required information."}, {"start": 163.92, "end": 169.2, "text": " Yes, it uses its little papers and answers the question."}, {"start": 169.2, "end": 172.07999999999998, "text": " A little virtual assistant AI."}, {"start": 172.07999999999998, "end": 173.67999999999998, "text": " I love it."}, {"start": 173.67999999999998, "end": 176.56, "text": " But, it isn't just an assistant."}, {"start": 176.56, "end": 180.23999999999998, "text": " Here come two of my favorite tasks from the paper."}, {"start": 180.23999999999998, "end": 186.48, "text": " One, if we dreamed up a catchy tune and we just need a bandmate to make it happen."}, {"start": 186.48, "end": 188.79999999999998, "text": " Well, say no more."}, {"start": 188.79999999999998, "end": 191.83999999999997, "text": " This little AI is right there to help."}, {"start": 191.83999999999997, "end": 193.27999999999997, "text": " So cool."}, {"start": 193.28, "end": 199.44, "text": " And it gets better, get this, too, it even knows about its limitations."}, {"start": 199.44, "end": 204.08, "text": " For instance, it knows that it can't flip the green box over."}, {"start": 204.08, "end": 211.44, "text": " This can only arise from knowledge and experimentation with the physics of this virtual world."}, {"start": 211.44, "end": 215.12, "text": " This will be fantastic for testing video games."}, {"start": 215.12, "end": 219.04, "text": " So, yes, the learning is indeed thorough."}, {"start": 219.04, "end": 224.16, "text": " Now, let's answer two more super important questions."}, {"start": 224.16, "end": 227.64, "text": " One, how quickly does the AI learn?"}, {"start": 227.64, "end": 230.88, "text": " And is it as good as a human?"}, {"start": 230.88, "end": 232.92, "text": " So learning speed."}, {"start": 232.92, "end": 235.23999999999998, "text": " Let's lock under the hood together and\u2026"}, {"start": 235.23999999999998, "end": 240.64, "text": " Oh, yes, yes, yes, I am super happy."}, {"start": 240.64, "end": 244.16, "text": " You are asking, Karoi, why are you super happy?"}, {"start": 244.16, "end": 251.6, "text": " I am super happy because learning is already happening by watching humans mingle for about"}, {"start": 251.6, "end": 253.48, "text": " 12 minutes."}, {"start": 253.48, "end": 255.96, "text": " That is next to nothing."}, {"start": 255.96, "end": 257.44, "text": " Amazing."}, {"start": 257.44, "end": 263.96, "text": " And one more thing that makes me perhaps even happier, and that is that I don't see a sudden"}, {"start": 263.96, "end": 268.96, "text": " spike in the growth, I see a nice linear growth instead."}, {"start": 268.96, "end": 270.24, "text": " Why is that important?"}, {"start": 270.24, "end": 276.64, "text": " Well, this means that there is a higher chance that the algorithm is slowly learning general"}, {"start": 276.64, "end": 282.44, "text": " concepts and starts understanding new things that we might ask it."}, {"start": 282.44, "end": 288.36, "text": " And if it 
were a small spike, there would be a higher chance that it had seen something"}, {"start": 288.36, "end": 295.08, "text": " similar to a human clearing a table in the training set and suddenly started copying"}, {"start": 295.08, "end": 296.08, "text": " it."}, {"start": 296.08, "end": 302.35999999999996, "text": " But with this kind of growth, there is less of a chance that it is just copying the teacher."}, {"start": 302.35999999999996, "end": 304.4, "text": " I love it."}, {"start": 304.4, "end": 311.8, "text": " And yes, this is the corner of the internet where we get unreasonably excited by a blue line."}, {"start": 311.8, "end": 314.2, "text": " Welcome to 2 Minute Papers."}, {"start": 314.2, "end": 320.08, "text": " Subscribe and hit the bell icon if you wish to see more amazing works like this."}, {"start": 320.08, "end": 324.71999999999997, "text": " And if we look at the task of lifting a drum, oh yes."}, {"start": 324.72, "end": 330.8, "text": " So nice linear growth, although at a slower pace than the previous task."}, {"start": 330.8, "end": 334.84000000000003, "text": " Now let's look at how well it does its job."}, {"start": 334.84000000000003, "end": 340.08000000000004, "text": " This is Team Human and MIA is Team AI."}, {"start": 340.08000000000004, "end": 344.40000000000003, "text": " And it has an over 70% success rate."}, {"start": 344.40000000000003, "end": 350.64000000000004, "text": " And remember, many of these tasks require not imitation, but learning."}, {"start": 350.64, "end": 357.03999999999996, "text": " The AI sees some humans mingling with these scenes, but to be able to answer the questions,"}, {"start": 357.03999999999996, "end": 360.64, "text": " it needs to think beyond what it has seen."}, {"start": 360.64, "end": 365.0, "text": " Once again, that might be perhaps a spark of intelligence."}, {"start": 365.0, "end": 367.0, "text": " What a time to be alive."}, {"start": 367.0, "end": 369.44, "text": " But actually, one more question."}, {"start": 369.44, "end": 374.96, "text": " A virtual assistant is pretty cool, but why is this so important?"}, {"start": 374.96, "end": 377.44, "text": " What does this have to do with our lives?"}, {"start": 377.44, "end": 383.44, "text": " Well, have a look at how open AI trained the robot hand in a simulation to be able to"}, {"start": 383.44, "end": 385.84, "text": " rotate these ruby cubes."}, {"start": 385.84, "end": 392.04, "text": " And then deployed the software onto a real robot hand and look."}, {"start": 392.04, "end": 397.2, "text": " It can use the simulation knowledge and now it works in the real world too."}, {"start": 397.2, "end": 401.72, "text": " But, seem to real as relevant to self-driving cars too."}, {"start": 401.72, "end": 409.20000000000005, "text": " Look, Tesla is already working on creating virtual worlds and training their cars there."}, {"start": 409.20000000000005, "end": 414.88000000000005, "text": " One of the advantages of that is that we can create really unlikely and potentially"}, {"start": 414.88000000000005, "end": 422.48, "text": " unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely."}, {"start": 422.48, "end": 427.68, "text": " And when they deploy them into the real world, they will have all this knowledge."}, {"start": 427.68, "end": 431.20000000000005, "text": " Waymo is running similar experiments too."}, {"start": 431.2, "end": 434.36, "text": " And I also wanted to thank you for watching this video."}, {"start": 434.36, "end": 440.76, 
"text": " I truly love talking about these amazing research papers and I am really honored to have"}, {"start": 440.76, "end": 446.84, "text": " so many of you fellow scholars who are here every episode enjoying these incredible works"}, {"start": 446.84, "end": 448.12, "text": " with me."}, {"start": 448.12, "end": 450.08, "text": " Thank you so much."}, {"start": 450.08, "end": 456.68, "text": " So I think this one also has great potential for a seem to real situation."}, {"start": 456.68, "end": 463.16, "text": " Looking to help humans in a virtual world and perhaps uploading the AI to a real robot"}, {"start": 463.16, "end": 464.16, "text": " assistant."}, {"start": 464.16, "end": 466.52, "text": " So, what do you think?"}, {"start": 466.52, "end": 468.84000000000003, "text": " What else could this be useful for?"}, {"start": 468.84000000000003, "end": 469.84000000000003, "text": " What do you expect to happen?"}, {"start": 469.84000000000003, "end": 472.32, "text": " A couple more papers down the line?"}, {"start": 472.32, "end": 474.6, "text": " Please let me know in the comments below."}, {"start": 474.6, "end": 476.52, "text": " I'd love to hear your thoughts."}, {"start": 476.52, "end": 480.48, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 480.48, "end": 487.56, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 487.56, "end": 496.36, "text": " Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory."}, {"start": 496.36, "end": 503.76, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 503.76, "end": 505.20000000000005, "text": " Asia."}, {"start": 505.2, "end": 512.76, "text": " And researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 512.76, "end": 515.6, "text": " workstations or servers."}, {"start": 515.6, "end": 522.92, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 522.92, "end": 523.92, "text": " today."}, {"start": 523.92, "end": 529.6, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 529.6, "end": 530.6, "text": " for you."}, {"start": 530.6, "end": 534.44, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-t-Pze6DNig
NVIDIA’s Robot AI Finally Enters The Real World! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "CLIPort: What and Where Pathways for Robotic Manipulation" is available here: https://cliport.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1869205/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu The thumbnail is an illustration. Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
[]
Two Minute Papers
https://www.youtube.com/watch?v=zxyZSxnTrZs
DeepMind’s New AI Finally Enters The Real World! 🤖
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "MuZero with Self-competition for Rate Control in VP9 Video Compression" is available here: https://deepmind.com/blog/article/MuZeros-first-step-from-research-into-the-real-world https://storage.googleapis.com/deepmind-media/MuZero/MuZero%20with%20self-competition.pdf deepmind https://arxiv.org/abs/2202.06626 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepMind #MuZero
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Finally, today DeepMind's amazing AI, MuZero, that plays chess and other games has now finally entered the real world and has learned to solve important real world problems. This is a reinforcement learning technique that works really well on games. Why? Well, in chess, Go and StarCraft, the controls are clear. We use the mouse to move our units around or choose where to move our pieces. And the score is also quite clear. We get rewarded if we win the game. That is going to be our score. To say that it plays these games really well would be an understatement. DeepMind's MuZero is one of the best in the world in chess, Go and StarCraft II as well. But one important question still remains. Of course, they did not create this AI to play video games. They created it to be able to become a general purpose AI that can solve not just games, but many problems. The games are just used as an excellent testbed for this AI. So, what else can it do? Well, finally, here it is. Hold on to your papers because scientists at DeepMind decided to start using the MuZero AI to create a real solution to a very important problem, video compression. And here comes the twist. They said, let's imagine that video compression is a video game. Okay, that's crazy. But let's accept it for now. But then, two questions. What are the controls and what is the score? How do we know if we won video compression? Well, the video game controller in our hand will be choosing the parameters of the video encoder for each frame. Okay, but there needs to be a score. So, what is the score here? How do we win? Well, we win if we are able to choose the parameters such that the quality of the output video is as good as the previous compression algorithms. But the size of the video is smaller. The smaller the output video, the better. That is going to be our score. And it also uses self-competition, which is now a popular concept in video game AIs. This means that the AI plays against previous versions of itself and we measure its improvement by it being able to defeat these previous versions. If it can do that reliably, we can conclude that yes, the AI is indeed improving. This concept works on boxing, playing catch, StarCraft, and I wonder how this would work for video compression. Well, let's see. Let's immediately drop it into deep waters. Yes, we are going to test this against a mature state of the art video compression algorithm that you are likely already using this very moment as you are watching this on YouTube. Well, good luck little AI. But I'll be honest, there is not much hope here. These traditional video compression algorithms are a culmination of decades of ingenious human research. Can a newcomer AI beat that? I am not sure. And now hold on to your papers and let's see together. How did it go? So a 4% difference. A learning based algorithm that is just 4% worse than decades of human innovation, that is great. But wait a second, it's actually not worse. Can it be that yes, it is not 4% worse, it is even 4% better. Holy mother of papers, that is absolutely incredible. Yes, this is the corner of the internet where we get super excited by a 4% better solution and understand why that matters a great deal. Welcome to 2 minute papers. But wait, we are experienced fellow scholars over here, we know that it is easy to be better by 4% in size at the cost of decreased quality. But having the same quality and saving 4% is insanely difficult. So which one is it? 
Let's look together. I am flicking between the state of the art and the new technique and yes, my goodness, the results really speak for themselves. So let's look a bit under the hood and see some more about the decisions the AI is making. Whoa, that is really cool. So what is this? Here we see the scores for the previous technique and the new AI and here they appear to be making similar decisions on this cover song video, but the AI makes somewhat better decisions overall. That is very cool. But look at that. In the second half of this gaming video, MuZero makes vastly different and vastly better decisions. I love it. And to have a first crack at such a mature problem and managed to improve it immediately, that is almost completely unheard of. Yet they have done it with protein folding and now they seem to have done it for video compression too. Bravo DeepMind. And note the meaning in the magnitude of the difference here. OpenAI's DALL-E 2 was this much better than DALL-E 1. That's not 4% better. If that was a percentage, this would be several hundred percent better. So why get so excited about 4%? Well, the key is that 3 to 4% more compression is incredible given how polished the state of the art techniques are. VP9 compressors are not some first crack at the problem. No, no. This is a mature field with decades of experience where every percent of improvement requires blood, papers, and tears and of course lots of compute and memory. And this is just the first crack at the problem for DeepMind and we get not 1%, but 4% essentially for free. That is absolutely amazing. My mind is blown by this result. Wow. And I also wanted to thank you for watching this video. I truly love talking about these amazing research papers and I am really honored to have so many of you fellow scholars who are here every episode enjoying these incredible works with me. It really means a lot. Every now and then I have to pinch myself to make sure that I really get to do this every day. Absolutely amazing. Thank you so much. So what do you think? What else could this be useful for? What do you expect to happen? A couple more papers down the line. Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data whether it's text from customer service requests, legal contracts or social media posts to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns or shipping or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to Cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.16, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karojona Ifeher."}, {"start": 5.16, "end": 14.0, "text": " Finally, today DeepMind's amazing AI, MewZero, that plays chess and other games has now"}, {"start": 14.0, "end": 21.6, "text": " finally entered the real world and has learned to solve important real world problems."}, {"start": 21.6, "end": 27.2, "text": " This is a reinforcement learning technique that works really well on games."}, {"start": 27.2, "end": 33.36, "text": " Why? Well, in chess, go and starcraft, the controls are clear."}, {"start": 33.36, "end": 39.44, "text": " We use the mouse to move our units around or choose where to move our pieces."}, {"start": 39.44, "end": 45.44, "text": " And the score is also quite clear. We get rewarded if we win the game."}, {"start": 45.44, "end": 52.8, "text": " That is going to be our score. To say that these words really well would be an understatement."}, {"start": 52.8, "end": 60.64, "text": " DeepMind's MewZero is one of the best in the world in chess, go and starcraft too as well."}, {"start": 60.64, "end": 68.96, "text": " But one important question still remains. Of course, they did not create this AI to play video games."}, {"start": 68.96, "end": 76.08, "text": " They created it to be able to become a general purpose AI that can solve not just games,"}, {"start": 76.08, "end": 82.96, "text": " but many problems. The games are just used as an excellent testbed for this AI."}, {"start": 82.96, "end": 89.36, "text": " So, what else can it do? Well, finally, here it is. Hold on to your papers"}, {"start": 89.36, "end": 97.12, "text": " because scientists at DeepMind decided to start using the MewZero AI to create a real solution"}, {"start": 97.12, "end": 103.44, "text": " to a very important problem, video compression. And here comes the twist."}, {"start": 103.44, "end": 108.32, "text": " They said, let's imagine that video compression is a video game."}, {"start": 109.12, "end": 115.03999999999999, "text": " Okay, that's crazy. But let's accept it for now. But then, two questions."}, {"start": 115.84, "end": 122.72, "text": " What are the controls and what is the score? How do we know if we won video compression?"}, {"start": 123.36, "end": 129.84, "text": " Well, the video game controller in our hand will be choosing the parameters of the video encoder"}, {"start": 129.84, "end": 136.88, "text": " for each frame. Okay, but there needs to be a score. So, what is the score here?"}, {"start": 137.44, "end": 144.32, "text": " How do we win? Well, we win if we are able to choose the parameters such that the quality"}, {"start": 144.32, "end": 152.08, "text": " of the output video is as good as the previous compression algorithms. But the size of the video"}, {"start": 152.08, "end": 160.16000000000003, "text": " is smaller. The smaller the output video, the better. That is going to be our score. And it also"}, {"start": 160.16000000000003, "end": 168.0, "text": " uses self-competition, which is now a popular concept in video game AI's. This means that the AI"}, {"start": 168.0, "end": 174.88000000000002, "text": " plays against previous versions of itself and we measure its improvement by it being able to"}, {"start": 174.88000000000002, "end": 181.12, "text": " defeat these previous versions. If it can do that reliably, we can conclude that yes,"}, {"start": 181.12, "end": 188.24, "text": " the AI is indeed improving. 
This concept works on boxing, playing catch, stockraft,"}, {"start": 188.96, "end": 196.8, "text": " and I wonder how this would work for video compression. Well, let's see. Let's immediately drop"}, {"start": 196.8, "end": 204.24, "text": " it into deep waters. Yes, we are going to test this against a mature state of the art video compression"}, {"start": 204.24, "end": 211.44, "text": " algorithm that you are likely already using this very moment as you are watching this on YouTube."}, {"start": 211.84, "end": 219.44, "text": " Well, good luck little AI. But I'll be honest, there is not much hope here. These traditional video"}, {"start": 219.44, "end": 227.12, "text": " compression algorithms are a culmination of decades of ingenious human research. Can a newcomer AI"}, {"start": 227.12, "end": 235.28, "text": " be that I am not sure. And now hold on to your papers and let's see together. How did it go?"}, {"start": 236.0, "end": 244.16, "text": " So a 4% difference. A learning based algorithm that is just 4% worse than decades of human"}, {"start": 244.16, "end": 253.76, "text": " innovation, that is great. But wait a second, it's actually not worse. Can it be that yes,"}, {"start": 253.76, "end": 263.92, "text": " it is not 4% worse, it is even 4% better. Holy matter of papers, that is absolutely incredible."}, {"start": 264.48, "end": 271.52, "text": " Yes, this is the corner of the internet where we get super excited by a 4% better solution"}, {"start": 271.52, "end": 278.8, "text": " and understand why that matters a great deal. Welcome to 2 minute papers. But wait,"}, {"start": 278.8, "end": 284.64, "text": " we are experienced fellow scholars over here, we know that it is easy to be better by 4%"}, {"start": 284.64, "end": 293.36, "text": " in size at the cost of decreased quality. But having the same quality and saving 4% is insanely"}, {"start": 293.36, "end": 301.12, "text": " difficult. So which one is it? Let's look together. I am flicking between the state of the art"}, {"start": 301.12, "end": 310.48, "text": " and the new technique and yes, my goodness, the results really speak for themselves. So let's"}, {"start": 310.48, "end": 319.68, "text": " look a bit under the hood and see some more about the decisions the AI is making. Whoa, that is"}, {"start": 319.68, "end": 327.84000000000003, "text": " really cool. So what is this? Here we see the scores for the previous technique and the new AI"}, {"start": 327.84, "end": 336.23999999999995, "text": " and here they appear to be making similar decisions on this cover song video, but the AI makes"}, {"start": 336.23999999999995, "end": 345.03999999999996, "text": " somewhat better decisions overall. That is very cool. But look at that. In the second half of"}, {"start": 345.03999999999996, "end": 354.55999999999995, "text": " this gaming video, Mew Zero makes vastly different and vastly better decisions. I love it. And to have"}, {"start": 354.56, "end": 361.36, "text": " a first crack at such a mature problem and managed to improve it immediately, that is almost"}, {"start": 361.36, "end": 367.92, "text": " completely unheard of. Yet they have done it with protein folding and now they seem to have done"}, {"start": 367.92, "end": 375.76, "text": " it for video compression too. Bravo deep mind. And note the meaning in the magnitude of the difference"}, {"start": 375.76, "end": 383.68, "text": " here. Open AI's Dolly 2 was this much better than Dolly 1. That's not 4% better. 
If that was a"}, {"start": 383.68, "end": 391.2, "text": " percentage, this would be several hundred percent better. So why get so excited about 4%."}, {"start": 391.92, "end": 399.36, "text": " Well, the key is that 3 to 4% more compression is incredible given how polished the state of the"}, {"start": 399.36, "end": 406.4, "text": " art techniques are. VP9 compressors are not some first crack at the problem. No, no. This is a"}, {"start": 406.4, "end": 413.44, "text": " mature field with decades of experience where every percent of improvement requires blood, papers,"}, {"start": 413.44, "end": 420.32, "text": " and tears and of course lots of compute and memory. And this is just the first crack at the"}, {"start": 420.32, "end": 429.84, "text": " problem for deep mind and we get not 1%, but 4% essentially for free. That is absolutely amazing."}, {"start": 429.84, "end": 437.76, "text": " My mind is blown by this result. Wow. And I also wanted to thank you for watching this video."}, {"start": 437.76, "end": 444.47999999999996, "text": " I truly love talking about these amazing research papers and I am really honored to have so many"}, {"start": 444.47999999999996, "end": 451.2, "text": " of you fellow scholars who are here every episode enjoying these incredible works with me. It really"}, {"start": 451.2, "end": 457.44, "text": " means a lot. Every now and then I have to pinch myself to make sure that I really get to do this"}, {"start": 457.44, "end": 465.36, "text": " every day. Absolutely amazing. Thank you so much. So what do you think? What else could this be"}, {"start": 465.36, "end": 471.2, "text": " useful for? What do you expect to happen? A couple more papers down the line. Please let me know"}, {"start": 471.2, "end": 477.92, "text": " in the comments below. I'd love to hear your thoughts. This episode has been supported by Cohear AI."}, {"start": 477.92, "end": 484.64, "text": " Cohear builds large language models and makes them available through an API so businesses can add"}, {"start": 484.64, "end": 491.12, "text": " advanced language understanding to their system or app quickly with just one line of code."}, {"start": 491.68, "end": 497.59999999999997, "text": " You can use your own data whether it's text from customer service requests, legal contracts"}, {"start": 497.59999999999997, "end": 504.88, "text": " or social media posts to create your own custom models to understand text or even generated."}, {"start": 505.52, "end": 511.03999999999996, "text": " For instance, it can be used to automatically determine whether your messages are about your"}, {"start": 511.04, "end": 519.04, "text": " business hours, returns or shipping or it can be used to generate a list of possible sentences"}, {"start": 519.04, "end": 526.08, "text": " you can use for your product descriptions. Make sure to go to Cohear.ai slash papers or click"}, {"start": 526.08, "end": 532.08, "text": " the link in the video description and give it a try today. It's super easy to use. Thanks for"}, {"start": 532.08, "end": 542.08, "text": " watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YNY_ZEuDncM
This New AI is Photoshop For Your Hair! 🧔
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik Rakshit!): http://wandb.me/barbershop 📝 The paper "Barbershop: GAN-based Image Compositing using Segmentation Masks" is available here: https://zpdesu.github.io/Barbershop/ https://github.com/ZPdesu/Barbershop ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. AI Research is improving at such a rapid pace that today we can not only create virtual humans, but even exert some artistic control over how these virtual humans should look. But today we can do something even cooler. What can be even cooler than that? Well, have you ever seen someone with a really cool haircut and wondered what you would look like with it? Would it be ridiculous or would you be able to pull it off? Well hold on to your papers because you don't have to wonder anymore. With this new AI you can visualize that yourself. Hmm, okay, but wait a second. Previous methods were already able to do that, so what is new here? Well yes, but look at the quality of the results. In some cases it is quite easy to find out that these are synthetic images while for other cases so many of the fine details of the face get lost in the process that the end result is not that convincing. So is that it? Is the hairstyle synthesis dream dead? Well, if you have been holding on to your papers now squeeze that paper and have a look at this new technique. And wow, my goodness, just look at that. This is a huge improvement, just one more paper down the line. And note that these previous methods were not some ancient techniques. No no, these are from just one and two years ago. So much improvement in so little time. I love it. Now let's see what more it can do. Look, we can choose the hairstyle we are looking for and yes, it doesn't just give us one image to work with. No, it gradually morphs our hair into the desired target shape. That is fantastic because even if we don't like the final hairstyle, one of those intermediate images may give us just what we are looking for. I love it. And it has one more fantastic property. Look, most of the details of the face remain in the final results. Whew, we don't even have to morph into a different person to be able to pull this off. Now, interestingly, look, the eye color may change, but the rest seems to be very close to the original. Not perfect, but very close. And did you notice? Yes, it gets better. Way better. We can even choose the structure. This would be the hairstyle, which can be one image, and the appearance, this is more like the hair color. So does that mean that? Yes, we can even choose them separately. That is, take two hair styles and fuse them together. Now have a look at this too. This one is an exclusive example to the best of my knowledge you can only see this here on Two Minute Papers. A huge thank you to the authors for creating these just for us. And I must say that we could stop on any of these images and I am not sure if I would be able to tell that it is synthetic. I may have a hunch due to how flamboyant some of these hairstyles are, but not due to quality concerns, that's for sure. Even if we subject these images to closer inspection, the visual artifacts of the previous techniques around the boundaries of the hair seem to be completely gone. And I wonder what this will be able to do just a couple more papers down the line. What do you think? I'd love to know, let me know in the comments below. What you see here is a report of this exact paper we have talked about which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look, I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. 
Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights and Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 11.44, "text": " AI Research is improving at such a rapid pace that today we cannot only create virtual"}, {"start": 11.44, "end": 19.240000000000002, "text": " humans, but even exert some artistic control over how these virtual humans should look."}, {"start": 19.240000000000002, "end": 23.52, "text": " But today we can do something even cooler."}, {"start": 23.52, "end": 25.76, "text": " What can be even cooler than that?"}, {"start": 25.76, "end": 31.200000000000003, "text": " Well have you ever seen someone with a really cool haircut and wondered what you would"}, {"start": 31.200000000000003, "end": 32.64, "text": " look like with it?"}, {"start": 32.64, "end": 36.84, "text": " Would it be ridiculous or would you be able to pull it off?"}, {"start": 36.84, "end": 42.0, "text": " Well hold on to your papers because you don't have to wonder anymore."}, {"start": 42.0, "end": 46.040000000000006, "text": " With this new AI you can visualize that yourself."}, {"start": 46.040000000000006, "end": 50.32000000000001, "text": " Hmm okay, but wait a second."}, {"start": 50.32000000000001, "end": 55.32000000000001, "text": " Previous methods were already able to do that, so what is new here?"}, {"start": 55.32, "end": 59.88, "text": " Well yes, but look at the quality of the results."}, {"start": 59.88, "end": 66.24, "text": " In some cases it is quite easy to find out that these are synthetic images while for"}, {"start": 66.24, "end": 72.24000000000001, "text": " other cases so many of the fine details of the face get lost in the process that the"}, {"start": 72.24000000000001, "end": 75.32, "text": " end result is not that convincing."}, {"start": 75.32, "end": 79.6, "text": " So is that it is the hairstyle synthesis dream dead?"}, {"start": 79.6, "end": 86.32, "text": " Well, if you have been holding on to your papers now squeeze that paper and have a look at"}, {"start": 86.32, "end": 88.28, "text": " this new technique."}, {"start": 88.28, "end": 93.28, "text": " And wow, my goodness, just look at that."}, {"start": 93.28, "end": 98.16, "text": " This is a huge improvement, just one more paper down the line."}, {"start": 98.16, "end": 102.6, "text": " And note that these previous methods were not some ancient techniques."}, {"start": 102.6, "end": 108.03999999999999, "text": " No no, these are from just one and two years ago."}, {"start": 108.04, "end": 111.28, "text": " So much improvement in so little time."}, {"start": 111.28, "end": 112.80000000000001, "text": " I love it."}, {"start": 112.80000000000001, "end": 115.92, "text": " Now let's see what more it can do."}, {"start": 115.92, "end": 122.2, "text": " Look, we can choose the hairstyle we are looking for and yes, it doesn't just give us"}, {"start": 122.2, "end": 124.12, "text": " one image to work with."}, {"start": 124.12, "end": 130.56, "text": " No, it gradually morphs our hair into the desired target shape."}, {"start": 130.56, "end": 136.64000000000001, "text": " That is fantastic because even if we don't like the final hairstyle, one of those intermediate"}, {"start": 136.64, "end": 140.35999999999999, "text": " images may give us just what we are looking for."}, {"start": 140.35999999999999, "end": 141.76, "text": " I love it."}, {"start": 141.76, "end": 144.95999999999998, "text": " And it has one more fantastic property."}, {"start": 144.95999999999998, "end": 150.6, "text": " Look, 
most of the details of the face remain in the final results."}, {"start": 150.6, "end": 156.72, "text": " Whew, we don't even have to morph into a different person to be able to pull this off."}, {"start": 156.72, "end": 164.11999999999998, "text": " Now, interestingly, look, the eye color may change, but the rest seems to be very close"}, {"start": 164.11999999999998, "end": 165.88, "text": " to the original."}, {"start": 165.88, "end": 169.16, "text": " But perfect, but very close."}, {"start": 169.16, "end": 170.72, "text": " And did you notice?"}, {"start": 170.72, "end": 173.32, "text": " Yes, it gets better."}, {"start": 173.32, "end": 174.32, "text": " Way better."}, {"start": 174.32, "end": 177.0, "text": " We can even choose the structure."}, {"start": 177.0, "end": 183.84, "text": " This would be the hairstyle which can be one image and the appearance this is more like"}, {"start": 183.84, "end": 185.35999999999999, "text": " the hair color."}, {"start": 185.35999999999999, "end": 187.44, "text": " So does that mean that?"}, {"start": 187.44, "end": 191.28, "text": " Yes, we can even choose them separately."}, {"start": 191.28, "end": 196.92, "text": " That is, take two hair styles and fuse them together."}, {"start": 196.92, "end": 199.04, "text": " Now have a look at this too."}, {"start": 199.04, "end": 204.6, "text": " This one is an exclusive example to the best of my knowledge you can only see this here"}, {"start": 204.6, "end": 206.36, "text": " on two-minute papers."}, {"start": 206.36, "end": 211.56, "text": " A huge thank you to the authors for creating these just for us."}, {"start": 211.56, "end": 217.64, "text": " And I must say that we could stop on any of these images and I am not sure if I would"}, {"start": 217.64, "end": 220.72, "text": " be able to tell that it is synthetic."}, {"start": 220.72, "end": 226.72, "text": " I may have a hunch due to how flamboyant some of these hairstyles are, but not due to"}, {"start": 226.72, "end": 229.88, "text": " quality concerns, that's for sure."}, {"start": 229.88, "end": 235.36, "text": " Even if we subject these images to closer inspection, the visual artifacts of the previous"}, {"start": 235.36, "end": 241.4, "text": " techniques around the boundaries of the hair seem to be completely gone."}, {"start": 241.4, "end": 247.32, "text": " And I wonder what this will be able to do just a couple more papers down the line."}, {"start": 247.32, "end": 248.32, "text": " What do you think?"}, {"start": 248.32, "end": 251.35999999999999, "text": " I'd love to know, let me know in the comments below."}, {"start": 251.35999999999999, "end": 256.08, "text": " What you see here is a report of this exact paper we have talked about which was made by"}, {"start": 256.08, "end": 257.56, "text": " Wates and Biasis."}, {"start": 257.56, "end": 259.76, "text": " I put a link to it in the description."}, {"start": 259.76, "end": 264.32, "text": " Make sure to have a look, I think it helps you understand this paper better."}, {"start": 264.32, "end": 269.64, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 269.64, "end": 274.76, "text": " Using their system, you can create beautiful reports like this one to explain your findings"}, {"start": 274.76, "end": 276.76, "text": " to your colleagues better."}, {"start": 276.76, "end": 283.71999999999997, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more."}, {"start": 283.71999999999997, 
"end": 289.76, "text": " And the best part is that Wates and Biasis is free for all individuals, academics and"}, {"start": 289.76, "end": 291.52, "text": " open source projects."}, {"start": 291.52, "end": 297.71999999999997, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 297.71999999999997, "end": 301.44, "text": " description and you can get a free demo today."}, {"start": 301.44, "end": 312.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=yl1jkmF7Xug
NVIDIA's Ray Tracing AI - This is The Next Level! 🤯
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "Neural Control Variates" is available here: https://research.nvidia.com/publication/2021-01_Neural-Control-Variates https://tom94.net/data/publications/mueller20neural/interactive-viewer/ 🔆 The free light transport course is available here. You'll love it! https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/illustrations/bed-plaid-pattern-bedside-table-3700115/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #rtx #rtxon
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a light simulation algorithm that doesn't work too well and infuse it with a powerful neural network. And what happens? Then, and this happens. Wow! So, what was all this about? Well, when you fire up a classical light simulation program, first we start out from, yes, absolutely nothing. And look, over time, as we simulate the path of more and more light rays, we get to know something about the scene. And slowly, over time, the image cleans up. But, there is a problem. What is the problem? Well, look, the image indeed cleans up over time. But, no one said that this process would be quick. In fact, this can take from minutes in the case of this scene to, get this, even days for this scene. In our earlier paper, we rendered this beautiful, but otherwise, sinfully difficult scene and it took approximately three weeks to finish. And it also took several computers running at the same time. However, these days, neural network-based learning methods are already capable of doing light transport. So, why not use them? Well, yes, they are, but they are not perfect. And we don't want an imperfect result. So, scientists at Nvidia said that this might be the perfect opportunity to use control variates. What are those? And why is this the perfect opportunity? Control variates are a way to inject our knowledge of a scene into the light transport algorithm. And here's the key. Any knowledge is useful as long as it gives us a head start, even if this knowledge is imperfect. Here is an illustration of that. What we do is that we start out using that knowledge and make up for the differences over time. Okay, so how much of a head start can we get with this? Normally, we start out from a black image and then a very noisy image. Now, hold on to your papers and let's see what this can do to help us. Wow! Look at that! This is incredible. What you're looking at is not the final rendering. This is what the algorithm knows. And instead of the blackness, we can start out from this. Goodness, it turns out that this might not have been an illustration, but the actual knowledge the AI has of the scene. So cool! Let's look at another example. In this bedroom, this will be our starting point. My goodness! Is that really possible? The bed and the floor are almost completely done. The curtain and the refractive objects are noisy, but do not worry about those for a second. This is just a starting point and it is still way better than starting out from a black image. To be able to visualize what the algorithm has learned with a little more finesse, we can also pick a point in space and learn what the world looks like from this point and how much light scatters around it. And we can even visualize what happens when we move this point around. Man, this technique has a ton of knowledge about these scenes. So once again, just to make sure we start out not from a black image, but from a learned image and now we don't have to compute all the light transport in the scene we just need to correct the differences. This is so much faster, but actually is it? Now let's look at how this helps us in practice. And that means, of course, equal time comparisons against previous light transport simulation techniques. And when it comes to comparisons, you know what I want? Yes, I want Eric Veach's legendary scene. 
See, this is an insane scene where all the light is coming from not here, but the neighboring room through a door that is just slightly ajar. And the path tracer, the reference light transport simulation technique behaves as expected. Yes, we get a very noisy image because very few of the simulated rays make it to the next room. Thus, most of our computation is going to waste. This is why we get this noisy image. And let's have a look what the new method can do in the same amount of time. Wow! Are you seeing what I am seeing? This is completely out of this world. My goodness! But this was a pathological scene designed to challenge our light transport simulation algorithms. What about a more typical outdoor scene? Add tons of incoming light from every direction, and my favorite, water caustics. Do we get any advantages here? Hmm, the path tracer is quite noisy. This will take quite a bit of time to clean up. Whereas the new technique, oh my, that is a clean image. How close is it to the fully converged reference image? Well, you tell me because you are already looking at that. Yes, now we are flicking between the new technique and the reference image, and I can barely tell the difference. There are some, for instance, here, but that seems to be about it. Can you tell the difference? Let me know in the comments below. Now, let's try a bathroom. Lots of shiny surfaces and specular light transport. And the results are... Look at that! There is no contest here. A huge improvement across the whole scene. And believe it or not, you still haven't seen the best results yet. Don't believe it? Now, if you have been holding onto your papers, squeeze that paper and look at the art room example here. This is an almost unusable image with classical light transport. And are you ready? Well, look at this. What in the world? This is where I fell off the chair when I was reading this paper. Absolute madness. Look, while the previous technique is barely making a dent into the problem, the new method is already just a few fireflies away from the reference. I can't believe my eyes. And this result is not just an anomaly. We can try a kitchen scene and draw similar conclusions. Let's see. Now we're talking. I am out of words. Now, despite all these amazing results, of course, not even this technique is perfect. This torus was put inside a glass container and is a nightmare scenario for any kind of light transport simulation. The new method successfully harnesses our knowledge about the scene and accelerates the process a great deal. But once again, we get fireflies. These are going to be difficult to get rid of and will still take a fair bit of time. But my goodness, if this is supposed to be a failure case, then yes, sign me up right now. Once again, the twist here is not just to use control variates, the initial knowledge thing, because in and of itself, it is not new. I, like many others, have been experimenting with this method back in 2013, almost 10 years ago, and back then, it was nowhere near as good as this one. So, what is the twist then? The twist is to use control variates and infuse them with a modern neural network. Infusing previous techniques with powerful learning-based methods is a fantastic area of research these days. For instance, here you see an earlier result, an ancient light transport technique called radiosity. This is what it was capable of back in the day. And here is the neural network-infused version. Way better. 
I think this area of research is showing a ton of promise, and I'm so excited to see more in this direction. So, what do you think? What would you use this for? I'd love to hear your thoughts, please let me know in the comments below. And when watching all these beautiful results, if you feel that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there, and learn about physics, the world around us, and more. If you watch it, you will see the world differently. This video has been supported by Weights & Biases. Look at this: they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is: in this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me/paper-forum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.4, "text": " Today, we are going to take a light simulation algorithm that doesn't work too well"}, {"start": 10.4, "end": 14.4, "text": " and infuse it with a powerful neural network."}, {"start": 14.4, "end": 16.4, "text": " And what happens?"}, {"start": 16.4, "end": 18.8, "text": " Then, and this happens."}, {"start": 18.8, "end": 20.0, "text": " Wow!"}, {"start": 20.0, "end": 22.400000000000002, "text": " So, what was all this about?"}, {"start": 22.400000000000002, "end": 26.8, "text": " Well, when you fire up a classical light simulation program,"}, {"start": 26.8, "end": 32.0, "text": " first we start out from, yes, absolutely nothing."}, {"start": 32.0, "end": 38.0, "text": " And look, over time, as we simulate the path of more and more light rays,"}, {"start": 38.0, "end": 40.8, "text": " we get to know something about the scene."}, {"start": 40.8, "end": 45.6, "text": " And slowly, over time, the image cleans up."}, {"start": 45.6, "end": 48.400000000000006, "text": " But, there is a problem."}, {"start": 48.400000000000006, "end": 50.0, "text": " What is the problem?"}, {"start": 50.0, "end": 54.8, "text": " Well, look, the image indeed cleans up over time."}, {"start": 54.8, "end": 59.199999999999996, "text": " But, no one said that this process would be quick."}, {"start": 59.199999999999996, "end": 68.0, "text": " In fact, this can take from minutes in the case of this scene to get this even days for this scene."}, {"start": 68.0, "end": 72.6, "text": " In our earlier paper, we rendered this beautiful, but otherwise,"}, {"start": 72.6, "end": 78.4, "text": " sinfully difficult scene and it took approximately three weeks to finish."}, {"start": 78.4, "end": 83.6, "text": " And it also took several computers running at the same time."}, {"start": 83.6, "end": 87.6, "text": " However, these days, neural network-based learning methods"}, {"start": 87.6, "end": 91.6, "text": " are already capable of doing light transport."}, {"start": 91.6, "end": 93.6, "text": " So, why not use them?"}, {"start": 93.6, "end": 98.0, "text": " Well, yes, they are, but they are not perfect."}, {"start": 98.0, "end": 101.19999999999999, "text": " And we don't want an imperfect result."}, {"start": 101.19999999999999, "end": 109.19999999999999, "text": " So, scientists at Nvidia said that this might be the perfect opportunity to use control variants."}, {"start": 109.2, "end": 114.2, "text": " What are those? And why is this the perfect opportunity?"}, {"start": 114.2, "end": 121.2, "text": " Control variants are a way to inject our knowledge of a scene into the light transport algorithm."}, {"start": 121.2, "end": 127.60000000000001, "text": " And here's the key. 
Any knowledge is useful as long as it gives us a head start,"}, {"start": 127.60000000000001, "end": 130.8, "text": " even if this knowledge is imperfect."}, {"start": 130.8, "end": 133.2, "text": " Here is an illustration of that."}, {"start": 133.2, "end": 140.79999999999998, "text": " What we do is that we start out using that knowledge and make up for the differences over time."}, {"start": 140.79999999999998, "end": 145.6, "text": " Okay, so how much of a head start can we get with this?"}, {"start": 145.6, "end": 151.6, "text": " Normally, we start out from a black image and then a very noisy image."}, {"start": 151.6, "end": 157.2, "text": " Now, hold on to your papers and let's see what this can do to help us."}, {"start": 157.2, "end": 160.0, "text": " Wow! Look at that!"}, {"start": 160.0, "end": 164.8, "text": " This is incredible. What you're looking at is not the final rendering."}, {"start": 164.8, "end": 167.2, "text": " This is what the algorithm knows."}, {"start": 167.2, "end": 172.0, "text": " And instead of the blackness, we can start out from this."}, {"start": 172.0, "end": 176.8, "text": " Goodness, it turns out that this might not have been an illustration,"}, {"start": 176.8, "end": 181.2, "text": " but the actual knowledge the AI has of the scene."}, {"start": 181.2, "end": 185.0, "text": " So cool! Let's look at another example."}, {"start": 185.0, "end": 188.8, "text": " In this bedroom, this will be our starting point."}, {"start": 188.8, "end": 192.0, "text": " My goodness! Is that really possible?"}, {"start": 192.0, "end": 196.0, "text": " The bed and the floor are almost completely done."}, {"start": 196.0, "end": 199.60000000000002, "text": " The curtain and the refractive objects are noisy,"}, {"start": 199.60000000000002, "end": 202.8, "text": " but do not worry about those for a second."}, {"start": 202.8, "end": 209.60000000000002, "text": " This is just a starting point and it is still way better than starting out from a black image."}, {"start": 209.60000000000002, "end": 215.60000000000002, "text": " To be able to visualize what the algorithm has learned with a little more finesse,"}, {"start": 215.6, "end": 222.0, "text": " we can also pick a point in space and learn what the world looks like from this point"}, {"start": 222.0, "end": 226.0, "text": " and how much light scatters around it."}, {"start": 226.0, "end": 234.4, "text": " And we can even visualize what happens when we move this point around."}, {"start": 234.4, "end": 239.2, "text": " Man, this technique has a ton of knowledge about these scenes."}, {"start": 239.2, "end": 244.6, "text": " So once again, just to make sure we start out not from a black image,"}, {"start": 244.6, "end": 251.0, "text": " but from a learned image and now we don't have to compute all the light transport in the scene"}, {"start": 251.0, "end": 253.79999999999998, "text": " we just need to correct the differences."}, {"start": 253.79999999999998, "end": 258.6, "text": " This is so much faster, but actually is it?"}, {"start": 258.6, "end": 262.2, "text": " Now let's look at how this helps us in practice."}, {"start": 262.2, "end": 270.6, "text": " And that means, of course, equal time comparisons against previous light transport simulation techniques."}, {"start": 270.6, "end": 274.6, "text": " And when it comes to comparisons, you know what I want?"}, {"start": 274.6, "end": 278.6, "text": " Yes, I want Eric Vitch's legendary scene."}, {"start": 278.6, "end": 285.0, "text": " See, this is an insane scene where 
all the light is coming from not here,"}, {"start": 285.0, "end": 291.0, "text": " but the neighboring room through a door that is just slightly a jar."}, {"start": 291.0, "end": 297.40000000000003, "text": " And the past tracer, the reference light transport simulation technique behaves as expected."}, {"start": 297.4, "end": 305.0, "text": " Yes, we get a very noisy image because very few of the simulated rays make it to the next room."}, {"start": 305.0, "end": 308.59999999999997, "text": " Thus, most of our computation is going to waste."}, {"start": 308.59999999999997, "end": 311.79999999999995, "text": " This is why we get this noisy image."}, {"start": 311.79999999999995, "end": 317.0, "text": " And let's have a look what the new method can do in the same amount of time."}, {"start": 317.0, "end": 320.79999999999995, "text": " Wow! Are you seeing what I am seeing?"}, {"start": 320.79999999999995, "end": 325.2, "text": " This is completely out of this world. My goodness!"}, {"start": 325.2, "end": 332.4, "text": " But this was a pathological scene designed to challenge our light transport simulation algorithms."}, {"start": 332.4, "end": 335.59999999999997, "text": " What about a more typical outdoor scene?"}, {"start": 335.59999999999997, "end": 342.4, "text": " Add tons of incoming light from every direction, and my favorite water caustics."}, {"start": 342.4, "end": 345.2, "text": " Do we get any advantages here?"}, {"start": 345.2, "end": 348.4, "text": " Hmm, the past tracer is quite noisy."}, {"start": 348.4, "end": 351.59999999999997, "text": " This will take quite a bit of time to clean up."}, {"start": 351.6, "end": 357.20000000000005, "text": " Whereas the new technique, oh my, that is a clean image."}, {"start": 357.20000000000005, "end": 361.20000000000005, "text": " How close is it to the fully converged reference image?"}, {"start": 361.20000000000005, "end": 366.0, "text": " Well, you tell me because you are already looking at that."}, {"start": 366.0, "end": 371.20000000000005, "text": " Yes, now we are flicking between the new technique and the reference image,"}, {"start": 371.20000000000005, "end": 374.0, "text": " and I can barely tell the difference."}, {"start": 374.0, "end": 379.40000000000003, "text": " There are some, for instance, here, but that seems to be about it."}, {"start": 379.4, "end": 383.0, "text": " Can you tell the difference? Let me know in the comments below."}, {"start": 383.0, "end": 390.2, "text": " Now, let's try a bathroom. Lots of shiny surfaces and specular light transport."}, {"start": 390.2, "end": 392.59999999999997, "text": " And the results are..."}, {"start": 392.59999999999997, "end": 396.59999999999997, "text": " Look at that! There is no contest here."}, {"start": 396.59999999999997, "end": 400.2, "text": " A huge improvement across the whole scene."}, {"start": 400.2, "end": 405.2, "text": " And believe it or not, you still haven't seen the best results yet."}, {"start": 405.2, "end": 414.0, "text": " Don't believe it. 
Now, if you have been holding onto your papers, squeeze that paper and look at the art room example here."}, {"start": 414.0, "end": 418.8, "text": " This is an almost unusable image with classical light transport."}, {"start": 418.8, "end": 420.8, "text": " And are you ready?"}, {"start": 420.8, "end": 423.8, "text": " Well, look at this."}, {"start": 423.8, "end": 425.59999999999997, "text": " What in the world?"}, {"start": 425.59999999999997, "end": 429.59999999999997, "text": " This is where I fell off the chair when I was reading this paper."}, {"start": 429.59999999999997, "end": 431.59999999999997, "text": " Absolute madness."}, {"start": 431.6, "end": 441.8, "text": " Look, while the previous technique is barely making a dent into the problem, the new method is already just a few fireflies away from the reference."}, {"start": 441.8, "end": 444.20000000000005, "text": " I can't believe my eyes."}, {"start": 444.20000000000005, "end": 447.40000000000003, "text": " And this result is not just an anomaly."}, {"start": 447.40000000000003, "end": 451.8, "text": " We can try a kitchen scene and draw similar conclusions."}, {"start": 451.8, "end": 453.40000000000003, "text": " Let's see."}, {"start": 453.40000000000003, "end": 455.20000000000005, "text": " Now we're talking."}, {"start": 455.20000000000005, "end": 457.40000000000003, "text": " I am out of words."}, {"start": 457.4, "end": 464.2, "text": " Now, despite all these amazing results, of course, not even this technique is perfect."}, {"start": 464.2, "end": 472.4, "text": " This tourist was put inside a glass container and is a nightmare scenario for any kind of light transport simulation."}, {"start": 472.4, "end": 479.79999999999995, "text": " The new method successfully harnesses our knowledge about the scene and accelerates the process a great deal."}, {"start": 479.79999999999995, "end": 483.79999999999995, "text": " But once again, we get fireflies."}, {"start": 483.8, "end": 489.40000000000003, "text": " These are going to be difficult to get rid of and will still take a fair bit of time."}, {"start": 489.40000000000003, "end": 497.2, "text": " But my goodness, if this is supposed to be a failure case, then yes, sign me up right now."}, {"start": 497.2, "end": 506.8, "text": " Once again, the twist here is not just to use control variants, the initial knowledge thing, because in and of itself, it is not new."}, {"start": 506.8, "end": 519.2, "text": " I, like many others, have been experimenting with this method back in 2013, almost 10 years ago, and back then, it was nowhere near as good as this one."}, {"start": 519.2, "end": 521.4, "text": " So, what is the twist then?"}, {"start": 521.4, "end": 528.4, "text": " The twist is to use control variants and infuse them with a modern neural network."}, {"start": 528.4, "end": 536.2, "text": " Infusing previous techniques with powerful learning-based methods is a fantastic area of research these days."}, {"start": 536.2, "end": 544.0, "text": " For instance, here you see an earlier result, an ancient light transport technique called radiosity."}, {"start": 544.0, "end": 547.6, "text": " This is what it was capable of back in the day."}, {"start": 547.6, "end": 551.6, "text": " And here is the neural network-infused version."}, {"start": 551.6, "end": 553.0, "text": " Way better."}, {"start": 553.0, "end": 560.6, "text": " I think this area of research is showing a ton of promise and I'm so excited to see more in this direction."}, {"start": 560.6, "end": 564.4000000000001, 
"text": " So, what do you think? What would you use this for?"}, {"start": 564.4, "end": 568.6, "text": " I'd love to hear your thoughts, please let me know in the comments below."}, {"start": 568.6, "end": 577.4, "text": " And when watching all these beautiful results, if you feel that this light transport thing is pretty cool, and you would like to learn more about it,"}, {"start": 577.4, "end": 583.1999999999999, "text": " I held a master-level course on this topic at the Technical University of Vienna."}, {"start": 583.1999999999999, "end": 587.4, "text": " Since I was always teaching it to a handful of motivated students,"}, {"start": 587.4, "end": 594.8, "text": " I thought that the teachings shouldn't only be available for the privileged few who can afford a college education,"}, {"start": 594.8, "end": 598.6, "text": " but the teachings should be available for everyone."}, {"start": 598.6, "end": 601.8, "text": " Free education for everyone, that's what I want."}, {"start": 601.8, "end": 607.6, "text": " So, the course is available free of charge for everyone, no strings attached,"}, {"start": 607.6, "end": 611.4, "text": " so make sure to click the link in the video description to get started."}, {"start": 611.4, "end": 619.6, "text": " We write a full-light simulation program from scratch there and learn about physics, the world around us, and more."}, {"start": 619.6, "end": 622.8, "text": " If you watch it, you will see the world differently."}, {"start": 622.8, "end": 626.6, "text": " This video has been supported by weights and biases."}, {"start": 626.6, "end": 634.1999999999999, "text": " Look at this, they have a great community forum that aims to make you the best machine learning engineer you can be."}, {"start": 634.2, "end": 641.2, "text": " You see, I always get messages from you fellow scholars telling me that you have been inspired by the series,"}, {"start": 641.2, "end": 644.2, "text": " but don't really know where to start."}, {"start": 644.2, "end": 652.2, "text": " And here it is, in this forum you can share your projects, ask for advice, look for collaborators and more."}, {"start": 652.2, "end": 661.2, "text": " Make sure to visit www.me-slash-paper-forum and say hi or just click the link in the video description."}, {"start": 661.2, "end": 668.2, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better videos for you."}, {"start": 668.2, "end": 697.2, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=0YypAl8mBsY
OpenAI’s New AI Writes A Letter To Humanity! ✍️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The post about GPT-3's Edit and Insert capabilities are available here: https://openai.com/blog/gpt-3-edit-insert/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1052010/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #GPT3
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to have an AI write a story for us, and even do our coding exam materials. And the coolest thing is that both of these will be done by the same AI. How? Well, OpenAI's GPT-3 technique is capable of all kinds of wizardry, as long as it involves text processing. For instance, finishing our sentences, or creating plots, spreadsheets, mathematical formulae, and many other things. Meanwhile, their DALL·E 2 AI is capable of generating incredible quality images from a written description, even if it is too specific. Way too specific. Now, today we are going to play with GPT-3, the text AI that just gained an amazing new capability. And that is, editing and inserting text. And if you think that this doesn't sound like much, well, check out these examples. Let's start a story. Today is the big day, it says. Now, the AI sees that the name of the section is High School Graduation, and it infers, quite intelligently, that this might be a message from a school to their graduating students. So far so good. But this was possible before, too. So, what's new here? Well, hold on to your papers, and now let's do this. Oh, yes. Change the story. Now we don't know why this is the big day, but we know that the next section will be about moving to San Francisco. And oh my, look at that. It understood the whole trick and rewrote the story accordingly. But, you know what? If we wish to move to Istanbul instead, not a problem. The AI gets that one too. Or, if we have grown tired of the city life, we can move to a farm too, and have the AI write out a story for that. Okay, that was fantastic. But when it comes to completion, I have an idea. Listen, how about pretending that we know how to make hot chocolate? Fill in the last step, which is, you know, not a great contribution, and let the AI do the rest of the dirty work. And can it do that? Wow, it knows exactly that this is a recipe, and it knows how to fill that in correctly. I love it. The "put hot chocolate" part is a bit of a cop-out, but you know what? I'll take it. From the recipe, it sounds like a good one. Cheers. Now, let's write a poem about GPT-3. Here we go. It rhymes too. Checkmark. And now, let's ask the AI to rephrase it in its own voice. And yes, it is able to do that while carefully keeping the rhyme intact. And when we ask it to sign it, we get a letter to humanity. How cool is that? Onwards to coding. And this is where things get really interesting. Here is a piece of code for computing Fibonacci numbers. Now translate it into a different programming language. Wow, that was quick. Now rewrite it to do the same thing, but the code has to fit in one line. And here comes my favorite: improve the runtime complexity of the code. In other words, make it more efficient (a sketch of what such a rewrite looks like follows after this transcript). And there we go. Now clearly, you see that this is a simple assignment. Yes, but make no mistake: these are fantastic signs that an AI is now capable of doing meaningful programming work. And if you have a friend who conducts coding interviews, show this to them. I reckon they will say that they have met plenty of human applicants who would do worse than this. If you have your own stories, you know what to do. Of course, leave them in the comment section. I'd love to hear them. And if you have been holding on to your paper so far, now squeeze that paper, because here comes the best part. We will fuse the two previous examples together. What does that mean?
Well, we will use the hot chocolate playbook and apply it to coding. Let's construct the start and the end of the code and hand it over to the AI. And yes, once again, it has a good idea as to what we are trying to do. It carefully looks at our variable names and fills in the key part of the code, in a way that no one will be able to tell that we haven't been doing our work properly. What a time to be alive! So, an AI that can write the story for us, even in a way where we only have to give it the scaffolding of the story and it fills in the rest. I love it. So cool. And it can help us with our coding problems, and even help us save a ton of time massaging our data into different forms. What an incredible step forward in democratizing these amazing AIs. So, what would you use this for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this: they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
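For the "improve the runtime complexity" assignment mentioned in the transcript above, here is a hypothetical before/after pair of the kind GPT-3 produced; the exact code from the video is not shown here, so both functions are illustrative: the naive recursion runs in exponential time, while the rewritten version runs in linear time.

def fib_naive(n: int) -> int:
    """Exponential time: recomputes the same subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n: int) -> int:
    """Linear time: carry the last two values forward instead of recursing."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both agree on the result; only the amount of work differs.
assert fib_naive(20) == fib_fast(20) == 6765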
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 12.72, "text": " Today, we are going to have an AI write a story for us, and even do our coding exam materials."}, {"start": 12.72, "end": 18.88, "text": " And the coolest thing is that both of these will be done by the same AI."}, {"start": 18.88, "end": 19.96, "text": " How?"}, {"start": 19.96, "end": 29.2, "text": " Well, open AI's GPT-3 technique is capable of all kinds of wizardry as long as it involves text processing."}, {"start": 29.2, "end": 38.08, "text": " For instance, finishing our sentences or creating plots, spreadsheets, mathematical formulae, and many other things."}, {"start": 38.08, "end": 46.72, "text": " While their dolly to AI is capable of generating incredible quality images from a written description,"}, {"start": 46.72, "end": 50.08, "text": " even if they are too specific."}, {"start": 50.08, "end": 52.08, "text": " Way too specific."}, {"start": 52.08, "end": 60.96, "text": " Now, today we are going to play with GPT-3, the text AI that just gained an amazing new capability."}, {"start": 60.96, "end": 65.2, "text": " And that is, editing and inserting text."}, {"start": 65.2, "end": 71.03999999999999, "text": " And if you think that this doesn't sound like much, well, check out these examples."}, {"start": 71.03999999999999, "end": 73.2, "text": " Let's start a story."}, {"start": 73.2, "end": 76.08, "text": " Today is the big day, it says."}, {"start": 76.08, "end": 84.64, "text": " Now, the AI sees that the name of the section is high school graduation, and it infers quite intelligently"}, {"start": 84.64, "end": 89.2, "text": " that this might be a message from a school to their graduating students."}, {"start": 89.2, "end": 90.88, "text": " So far so good."}, {"start": 90.88, "end": 94.16, "text": " But this was possible before too."}, {"start": 94.16, "end": 96.4, "text": " So, what's new here?"}, {"start": 96.4, "end": 101.84, "text": " Well, hold on to your papers, and now let's do this."}, {"start": 101.84, "end": 103.12, "text": " Oh, yes."}, {"start": 103.12, "end": 104.72, "text": " Change the story."}, {"start": 104.72, "end": 110.56, "text": " Now we don't know why this is the big day, but we know that the next section will be about"}, {"start": 110.56, "end": 113.2, "text": " moving to San Francisco."}, {"start": 113.2, "end": 116.96, "text": " And oh my, look at that."}, {"start": 116.96, "end": 122.16, "text": " It understood the whole trick and rewrote the story accordingly."}, {"start": 122.16, "end": 124.4, "text": " But, you know what?"}, {"start": 124.4, "end": 128.6, "text": " If we wish to move to Istanbul instead, not a problem."}, {"start": 128.6, "end": 131.2, "text": " The AI gets that one too."}, {"start": 131.2, "end": 138.32, "text": " Or, if we have grown tired of the city life, we can move to a farm too, and have the AI"}, {"start": 138.32, "end": 140.72, "text": " write out a story for that."}, {"start": 140.72, "end": 143.51999999999998, "text": " Okay, that was fantastic."}, {"start": 143.51999999999998, "end": 148.07999999999998, "text": " But, when it comes to completion, I have an idea."}, {"start": 148.07999999999998, "end": 154.07999999999998, "text": " Listen, how about pretending that we know how to make hot chocolate?"}, {"start": 154.07999999999998, "end": 160.48, "text": " Feel in the last step, which is, you know, not a great contribution, and let the AI do"}, {"start": 160.48, 
"end": 162.88, "text": " the rest of the dirty work."}, {"start": 162.88, "end": 164.79999999999998, "text": " And can it do that?"}, {"start": 164.79999999999998, "end": 172.76, "text": " Wow, it knows exactly that this is a recipe, and it knows how to fill that in correctly."}, {"start": 172.76, "end": 174.64, "text": " I love it."}, {"start": 174.64, "end": 178.76, "text": " The put hot chocolate part is a bit of a cop out, but you know what?"}, {"start": 178.76, "end": 180.32, "text": " I'll take it."}, {"start": 180.32, "end": 183.56, "text": " From the recipe, it sounds like a good one."}, {"start": 183.56, "end": 184.56, "text": " Cheers."}, {"start": 184.56, "end": 189.39999999999998, "text": " Now, let's write a poem about GPT3."}, {"start": 189.4, "end": 190.4, "text": " Here we go."}, {"start": 190.4, "end": 192.76000000000002, "text": " It rhymes too."}, {"start": 192.76000000000002, "end": 193.76000000000002, "text": " Checkmark."}, {"start": 193.76000000000002, "end": 199.16, "text": " And now, let's ask the AI to rephrase it in its own voice."}, {"start": 199.16, "end": 206.4, "text": " And yes, it is able to do that while carefully keeping the rhyme intact."}, {"start": 206.4, "end": 212.20000000000002, "text": " And when we ask it to sign it, we get a letter to humanity."}, {"start": 212.20000000000002, "end": 214.36, "text": " How cool is that?"}, {"start": 214.36, "end": 216.0, "text": " Onwards to coding."}, {"start": 216.0, "end": 219.32, "text": " And this is where things get really interesting."}, {"start": 219.32, "end": 223.51999999999998, "text": " Here is a piece of code for computing Fibonacci numbers."}, {"start": 223.51999999999998, "end": 227.56, "text": " Now translate it into a different programming language."}, {"start": 227.56, "end": 230.16, "text": " Wow, that was quick."}, {"start": 230.16, "end": 237.44, "text": " Now rewrite it to do the same thing, but the code has to fit one line."}, {"start": 237.44, "end": 242.48, "text": " And here comes my favorite, improve the runtime complexity of the code."}, {"start": 242.48, "end": 247.28, "text": " In other words, make it more efficient."}, {"start": 247.28, "end": 248.95999999999998, "text": " And there we go."}, {"start": 248.96, "end": 253.08, "text": " Now clearly, you see that this is a simple assignment."}, {"start": 253.08, "end": 256.28000000000003, "text": " Yes, but make no mistake."}, {"start": 256.28000000000003, "end": 262.6, "text": " These are fantastic signs that an AI is now capable of doing meaningful programming"}, {"start": 262.6, "end": 263.92, "text": " work."}, {"start": 263.92, "end": 268.16, "text": " And if you have a friend who conducted coding interviews, show this to them."}, {"start": 268.16, "end": 273.8, "text": " I reckon they will say that they have met plenty of human applicants who would do worse"}, {"start": 273.8, "end": 274.96000000000004, "text": " than this."}, {"start": 274.96000000000004, "end": 277.96000000000004, "text": " If you have your own stories, you know what to do."}, {"start": 277.96, "end": 280.68, "text": " Of course, leave them in the comment section."}, {"start": 280.68, "end": 282.56, "text": " I'd love to hear them."}, {"start": 282.56, "end": 288.56, "text": " And if you have been holding on to your paper so far, now squeeze that paper because here"}, {"start": 288.56, "end": 290.44, "text": " comes the best part."}, {"start": 290.44, "end": 295.08, "text": " We will fuse the two previous examples together."}, {"start": 295.08, "end": 296.32, 
"text": " What does that mean?"}, {"start": 296.32, "end": 302.15999999999997, "text": " Well, we will use the hard chocolate playbook and apply it to coding."}, {"start": 302.16, "end": 308.76000000000005, "text": " Let's construct the start and the end of the code and hand it away to the AI."}, {"start": 308.76000000000005, "end": 315.28000000000003, "text": " And yes, once again, it has a good idea as to what we are trying to do."}, {"start": 315.28000000000003, "end": 321.64000000000004, "text": " It carefully looks at our variable names and feels in the key part of the code in a way"}, {"start": 321.64000000000004, "end": 326.84000000000003, "text": " that no one will be able to tell that we haven't been doing our work properly."}, {"start": 326.84000000000003, "end": 328.76000000000005, "text": " What a time to be alive."}, {"start": 328.76, "end": 335.8, "text": " So an AI that can write the story for us, even in a way that we only have to give it the"}, {"start": 335.8, "end": 340.24, "text": " scaffolding of the story and it feels in the rest."}, {"start": 340.24, "end": 341.96, "text": " I love it."}, {"start": 341.96, "end": 343.36, "text": " So cool."}, {"start": 343.36, "end": 350.24, "text": " And it can help us with our coding problems and even help us save a ton of time massaging"}, {"start": 350.24, "end": 352.92, "text": " our data into different forms."}, {"start": 352.92, "end": 358.28, "text": " What an incredible step forward in democratizing these amazing AI's."}, {"start": 358.28, "end": 361.03999999999996, "text": " So what would you use this for?"}, {"start": 361.03999999999996, "end": 362.03999999999996, "text": " What do you expect to happen?"}, {"start": 362.03999999999996, "end": 364.44, "text": " A couple more papers down the line."}, {"start": 364.44, "end": 366.52, "text": " Please let me know in the comments below."}, {"start": 366.52, "end": 368.52, "text": " I'd love to hear your thoughts."}, {"start": 368.52, "end": 372.47999999999996, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 372.47999999999996, "end": 379.67999999999995, "text": " If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 379.67999999999995, "end": 380.67999999999995, "text": " Get this."}, {"start": 380.67999999999995, "end": 388.23999999999995, "text": " They've recently launched an Nvidia RTX 8000 with 48GB of memory."}, {"start": 388.24, "end": 395.84000000000003, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 395.84000000000003, "end": 397.16, "text": " Azure."}, {"start": 397.16, "end": 404.8, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 404.8, "end": 407.6, "text": " workstations or servers."}, {"start": 407.6, "end": 414.92, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 414.92, "end": 415.92, "text": " today."}, {"start": 415.92, "end": 421.56, "text": " Thanks for thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 421.56, "end": 422.56, "text": " for you."}, {"start": 422.56, "end": 449.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=X3_LD3R_Ygs
OpenAI DALL-E 2: Top 10 Insane Results! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here: https://openai.com/dall-e-2/ https://www.instagram.com/openaidalle/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters 00:00 Intro 00:34 GPT-3 - OpenAI's Text Magic 01:18 Image-GPT Was Born 01:55 Dall-E 02:44 Dall-E 2! 03:30 1. Panda mad scientist 03:55 2. Teddy bear mad scientists 04:20 3. Teddy skating on Times Square 05:05 4. Nebula dunking 05:30 5. Cat Napoleon 05:57 6. Flamingos everywhere! 06:49 7. Don't forget the corgis! 07:43 8. It can do interior design! 08:50 9. Dall-E 1 vs Dall-E 2 09:28 10. Not perfect 09:57 Bonus: Hold on to your papers! 10:18 It draws itself 10:42 One more thing 11:07 Another legendary paper Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #dalle #dalle2
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I am so excited to show you this. Look, we are going to have an AI look at 650 million images on the internet, and then ask it to generate the craziest synthetic images I have ever seen. And wait, it gets better: we will also see what this AI thinks it looks like. Spoiler alert: it appears to be cuddly. You'll see. So, what is all this about? Well, in June 2020, OpenAI created GPT-3, a magical AI that could finish your sentences, and among many incredible examples, it could generate website layouts from a written description. This opened the door for a ton of cool applications, but note that all of these applications are built on understanding text. However, no one said that these neural networks can only deal with text information, and sure enough, a few months later, scientists at OpenAI thought that if we can complete text sentences, why not try to complete images too? And thus, Image GPT was born. The problem statement was simple: we give it an incomplete image, and we ask the AI to fill in the missing pixels. If we give it this image, it understood that these birds are likely standing on something, and it even has several ideas as to what that might be. Look: a branch, a stone, or they can even stand in water, and amazingly, even their mirror images are created by the AI. But then, scientists at OpenAI thought, why not have the user write a text description and get them a really well done image of exactly that? That sounds cool, and it gets even cooler the crazier the ideas we give to it. The name of this technique is a mix of Salvador Dalí and Pixar's WALL·E. So, please meet DALL·E. This could do a ton. For instance, it understands styles and rendering techniques. Being a computer graphics person, I am so happy to see that it learned the concept of low polygon count rendering, isometric views, clay objects, and we can even add an x-ray view to the owl. Kind of. And now, just a year later, would you look at that? Oh, wow! Here is DALL·E 2. Oh, my! I cannot tell you how excited I am to have a closer look at the results. Let's dive in together. So, what can it do? Well, that's not the right question. By the end of this video, I bet you will think that the more appropriate question would be: what can't it do? This one can take descriptions that are so specific, I would say, that perhaps even a good human artist might have trouble with them. Now, hold on to your papers and have a look at ten of my favorite examples. One: a panda mad scientist mixing sparkling chemicals. Wow! Look at that! This is something else. It even has sunglasses for extra street cred, and the reflections of the questionable substance it is researching are also present on its sunglasses. But the mad science doesn't stop there. Two: teddy bears mixing sparkling chemicals as mad scientists. But at this point, we already know that doing this would be too easy for the AI. So, let's do it in multiple styles. First, steampunk; second, 1990s Saturday morning cartoon; and third, digital art. It can pull off all of these. Three: now, about variants. Give me a teddy bear on a skateboard in Times Square. Now, this is interesting for multiple reasons. For instance, you see that it can generate a ton of variants. That is fantastic. As a light transport researcher, I cannot resist mentioning how nice of a depth of field effect it is able to make. And, would you look at that?
It also knows about the highly sought-after signature effect of the lights blurred into these beautiful balls in the background. The AI understands these bokeh balls, and the fact that it can harness this kind of knowledge is absolutely amazing. Four: and if you think that was too specific, you have seen nothing yet. Check this out. An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula. I love it. It also has a nice Space Jam quality to it. Well done, little AI. So good. Five: you know what? I want even more specific and even more ridiculous images. A propaganda poster depicting a cat dressed as the French emperor Napoleon, holding a piece of cheese. Now that is way too specific. Nobody can pull that off. There is no way that... Wow! I think we have a winner here. When the next election is coming up, you know what to do. Six: and still, once again, believe it or not, you have seen nothing yet. We can get even more specific. So much so that we can even edit an image that is already done. For instance, if we feel that this image is missing a flamingo, we can request that it is placed there, but we can even specify the location for it. And... even the reflections are created for it, and they are absolutely beautiful. Now, note that I think if there are reflections here, then perhaps there should have been reflections here too. A perfect test for one more paper down the line, when DALL·E 3 arrives. Make sure to subscribe and hit the bell icon. You really don't want to miss it. Seven: this one puts on a clinic in understanding the world around us. This image is missing something. Missing what? Well, of course: corgis. And I cannot believe this. If we specify the painting as the location, it will not only have a painterly style, but one that already matches the painting on the wall. This is true for the other painting too. This is incredible. I absolutely love it. And the last test: does it? Yes, it does. If we are outside of the paintings, at the photo part, this good boy becomes photorealistic. Requesting variants is also a possibility here. So, what do you think? Which is the best boy? Let me know in the comments below. And number eight: if we can place any object anywhere, this is an excellent tool to perform interior design. We can put a couch wherever we please, and I am already looking forward to inspecting the reflections here. Oh yes, this is very difficult to compute. This is not a matte, diffuse object, and not even a mirror-like specular surface, but a glossy reflection that is somewhere in between. But it gets worse. This is a textured object, which also has to be taken into consideration, and proper shadows also have to be generated, in a difficult situation where light comes from a ton of different directions. This is a nightmare, and the results are not perfect, but my goodness, if this is not an AI that has a proper understanding of the world around us, I don't know what is. Absolutely incredible progress in just one year. I cannot believe my eyes. You know what? Number nine: actually, let's look at how much it has improved since DALL·E 1, side by side. Oh my. Now, there is no contest here. DALL·E 2 is on a completely different level from its first iteration. This is so much better, and once again, such improvement in just a year. What a time to be alive! And what do you think: if DALL·E 3 appears, what will it be capable of? What would you use this, or DALL·E 3, for? Please let me know in the comments below. I'd love to know what you think.
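To unpack the diffuse / glossy / mirror distinction from the couch example, one classical textbook way to write it down is the normalized Phong reflection model below. This is standard rendering notation shown purely as an illustration, not anything DALL·E 2 computes internally.

% Normalized Phong BRDF: a diffuse term plus a glossy lobe around the
% mirror direction; alpha is the angle to the perfect reflection.
\[
f_r(\omega_i, \omega_o) =
  \frac{k_d}{\pi}
  + k_s \,\frac{n + 2}{2\pi}\,\cos^{n}\!\alpha .
\]
% n small  -> wide lobe, a soft matte-like highlight;
% n large  -> the lobe collapses toward a mirror-like reflection;
% in between sits exactly the "glossy" case that is hard to get right.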
Now, of course, not even DALL·E 2 is perfect. Look at that. Number 10: inspect the pictures and tell me what you think the prompt for this must have been. Not easy, right? Let me know your tips in the comments below. Well, it was a sign that says deep learning. Well, A plus for effort, little AI, but this is certainly one of the failure cases. And you know what? I cannot resist. Plus one: if you have been holding onto your papers, now squeeze that paper at least as hard as these scholars are squeezing their papers. So, which one are you? Which one resembles your reaction to this paper the best? Let me know in the comments below. And yes, as promised, here is what it thinks of itself. It is very soft and cuddly. Or at least it wants us to think that it is so. Food for thought. And if you speak robot and have any idea what this writing could mean, make sure to let me know below. And one more thing. We noted that this AI was trained on 650 million images and uses 3.5 billion parameters. These are not rookie numbers by any stretch of the imagination. However, I am hoping that with this, there will be a chance that other independent groups will also be able to train and use their own DALL·E 2. And just in the last few years, OpenAI has given us legendary papers. For instance, an AI that can play hide and seek, solve math tests, or play a game called Dota 2 on a world champion level. Given these, I hereby appoint DALL·E 2 into the pantheon of these legendary works. And I have to say, I am super excited to see what they come up with next. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. And get this: they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " In dear fellow scholars, this is two minute papers with Dr. Karo Zorna Ifeher."}, {"start": 4.76, "end": 8.6, "text": " Today, I am so excited to show you this."}, {"start": 8.6, "end": 16.2, "text": " Look, we are going to have an AI look at 650 million images on the internet"}, {"start": 16.2, "end": 22.400000000000002, "text": " and then ask it to generate the craziest synthetic images I have ever seen."}, {"start": 22.400000000000002, "end": 29.0, "text": " And wait, it gets better, we will also see what this AI thinks it looks like."}, {"start": 29.0, "end": 32.6, "text": " Spoiler alert, it appears to be cuddly."}, {"start": 32.6, "end": 33.6, "text": " You'll see."}, {"start": 33.6, "end": 35.8, "text": " So, what is all this about?"}, {"start": 35.8, "end": 44.4, "text": " Well, in June 2020, OpenAI created GPT-3, a magical AI that could finish your sentences"}, {"start": 44.4, "end": 51.400000000000006, "text": " and among many incredible examples, it could generate website layouts from a written description."}, {"start": 51.400000000000006, "end": 55.2, "text": " This opened the door for a ton of cool applications,"}, {"start": 55.2, "end": 61.6, "text": " but note that all of these applications are built on understanding text."}, {"start": 61.6, "end": 67.60000000000001, "text": " However, no one said that these neural networks can only deal with text information"}, {"start": 67.60000000000001, "end": 73.2, "text": " and sure enough, a few months later, scientists at OpenAI thought"}, {"start": 73.2, "end": 80.2, "text": " that if we can complete text sentences, why not try to complete images too?"}, {"start": 80.2, "end": 83.2, "text": " And thus, image GPT was born."}, {"start": 83.2, "end": 87.4, "text": " The problem statement was simple, we give it an incomplete image"}, {"start": 87.4, "end": 91.8, "text": " and we ask the AI to fill in the missing pixels."}, {"start": 91.8, "end": 98.60000000000001, "text": " If we give it this image, it understood that these birds are likely standing on something"}, {"start": 98.60000000000001, "end": 103.4, "text": " and it even has several ideas as to what that might be."}, {"start": 103.4, "end": 109.60000000000001, "text": " Look, a branch, a stone, or they can even stand in water"}, {"start": 109.6, "end": 115.8, "text": " and amazingly, even their mirror images are created by the AI."}, {"start": 115.8, "end": 123.0, "text": " But then, scientists at OpenAI thought, why not have the user write a text description"}, {"start": 123.0, "end": 127.6, "text": " and get them a really well done image of exactly that?"}, {"start": 127.6, "end": 133.6, "text": " That sounds cool, and it gets even cooler the crazier the ideas we give to it."}, {"start": 133.6, "end": 139.6, "text": " The name of this technique is a mix of Salvador Dali and Pixar's Wally."}, {"start": 139.6, "end": 144.0, "text": " So, please meet Dali. This could do a ton."}, {"start": 144.0, "end": 149.4, "text": " For instance, this understands styles and rendering techniques."}, {"start": 149.4, "end": 155.0, "text": " Being a computer graphics person, I am so happy to see that it learned the concept"}, {"start": 155.0, "end": 160.0, "text": " of low polygon counter-rendering, isometric views, clay objects,"}, {"start": 160.0, "end": 164.6, "text": " and we can even add an x-ray view to the owl."}, {"start": 164.6, "end": 170.2, "text": " Kind of. 
And now, just a year later, would you look at that?"}, {"start": 170.2, "end": 174.0, "text": " Oh, wow! Here is Dali too."}, {"start": 174.0, "end": 180.2, "text": " Oh, my! I cannot tell you how excited I am to have a closer look at the results."}, {"start": 180.2, "end": 184.2, "text": " Let's dive in together. So, what can it do?"}, {"start": 184.2, "end": 187.0, "text": " Well, that's not the right question."}, {"start": 187.0, "end": 192.4, "text": " By the end of this video, I bet you will think that the more appropriate question would be,"}, {"start": 192.4, "end": 194.0, "text": " what can it do?"}, {"start": 194.0, "end": 198.4, "text": " This one can take descriptions that are so specific, I would say,"}, {"start": 198.4, "end": 203.0, "text": " that perhaps even a good human artist might have trouble with."}, {"start": 203.0, "end": 209.4, "text": " Now, hold on to your papers and have a look at ten of my favorite examples."}, {"start": 209.4, "end": 215.0, "text": " One, a panda, mad scientist, mixing sparkling chemicals."}, {"start": 215.0, "end": 219.4, "text": " Wow! Look at that! This is something else."}, {"start": 219.4, "end": 223.2, "text": " It even has sunglasses for extra street cred"}, {"start": 223.2, "end": 227.6, "text": " and the reflections of the questionable substance it is researching"}, {"start": 227.6, "end": 230.4, "text": " are also present on its sunglasses."}, {"start": 230.4, "end": 233.8, "text": " But, the mad science doesn't stop there."}, {"start": 233.8, "end": 239.2, "text": " Two, teddy bears mixing sparkling chemicals as mad scientists."}, {"start": 239.2, "end": 245.6, "text": " But, at this point, we already know that doing this would be too easy for the AI."}, {"start": 245.6, "end": 249.0, "text": " So, let's do it in multiple styles."}, {"start": 249.0, "end": 255.6, "text": " First, steampunk, second, 1990s Saturday morning cartoon,"}, {"start": 255.6, "end": 258.4, "text": " and third, digital art."}, {"start": 258.4, "end": 261.0, "text": " It can pull off all of these."}, {"start": 261.0, "end": 263.8, "text": " Three, now about variants."}, {"start": 263.8, "end": 268.4, "text": " Give me a teddy bear on a skateboard in Times Square."}, {"start": 268.4, "end": 271.79999999999995, "text": " Now, this is interesting for multiple reasons."}, {"start": 271.79999999999995, "end": 276.2, "text": " For instance, you see that it can generate a ton of variants."}, {"start": 276.2, "end": 278.2, "text": " That is fantastic."}, {"start": 278.2, "end": 282.0, "text": " As a light transport researcher, I cannot resist mentioning"}, {"start": 282.0, "end": 285.79999999999995, "text": " how nice of a depth of field effect it is able to make."}, {"start": 285.79999999999995, "end": 288.4, "text": " And, would you look at that?"}, {"start": 288.4, "end": 292.79999999999995, "text": " It also knows about the highly sought after signature effect"}, {"start": 292.79999999999995, "end": 297.59999999999997, "text": " of the lights blurred into these beautiful balls in the background."}, {"start": 297.6, "end": 300.8, "text": " The AI understands these bouquet balls"}, {"start": 300.8, "end": 306.20000000000005, "text": " and the fact that it can harness this kind of knowledge is absolutely amazing."}, {"start": 306.20000000000005, "end": 311.40000000000003, "text": " Four, and if you think that it's too specific, you have seen nothing yet."}, {"start": 311.40000000000003, "end": 312.8, "text": " Check this out."}, {"start": 312.8, "end": 316.6, "text": " 
An expressive oil painting of a basketball player,"}, {"start": 316.6, "end": 321.40000000000003, "text": " dunking depicted as an explosion of a nebula."}, {"start": 321.40000000000003, "end": 323.0, "text": " I love it."}, {"start": 323.0, "end": 327.0, "text": " It also has a nice space gem quality to it."}, {"start": 327.0, "end": 329.0, "text": " Well done little AI."}, {"start": 329.0, "end": 330.6, "text": " So good."}, {"start": 330.6, "end": 332.4, "text": " Five, you know what?"}, {"start": 332.4, "end": 336.8, "text": " I want even more specific and even more ridiculous images."}, {"start": 336.8, "end": 345.4, "text": " A propaganda poster depicting a cat and dressed as French emperor, Napoleon holding a piece of cheese."}, {"start": 345.4, "end": 348.2, "text": " Now that is way too specific."}, {"start": 348.2, "end": 349.8, "text": " Nobody can pull that off."}, {"start": 349.8, "end": 351.6, "text": " There is no way that..."}, {"start": 351.6, "end": 352.6, "text": " Wow!"}, {"start": 352.6, "end": 354.6, "text": " I think we have a winner here."}, {"start": 354.6, "end": 358.20000000000005, "text": " When the next election is coming up, you know what to do."}, {"start": 358.20000000000005, "end": 364.0, "text": " Six, and still, once again, believe it or not, you have seen nothing yet."}, {"start": 364.0, "end": 366.40000000000003, "text": " We can get even more specific."}, {"start": 366.40000000000003, "end": 371.6, "text": " So much so that we can even edit an image that is already done."}, {"start": 371.6, "end": 376.0, "text": " For instance, if we feel that this image is missing a flamingo,"}, {"start": 376.0, "end": 378.8, "text": " we can request that it is placed there,"}, {"start": 378.8, "end": 383.0, "text": " but we can even specify the location for it."}, {"start": 383.0, "end": 385.0, "text": " And..."}, {"start": 385.0, "end": 390.8, "text": " Even the reflections are created for it, and they are absolutely beautiful."}, {"start": 390.8, "end": 394.6, "text": " Now note that I think if there are reflections here,"}, {"start": 394.6, "end": 399.2, "text": " then perhaps there should have been reflections here too."}, {"start": 399.2, "end": 404.2, "text": " A perfect test for one more paper down the line, one dolly three arrives."}, {"start": 404.2, "end": 407.2, "text": " Make sure to subscribe and hit the bell icon."}, {"start": 407.2, "end": 409.2, "text": " You really don't want to miss it."}, {"start": 409.2, "end": 415.0, "text": " Seven, this one puts up a clinic in understanding the world around us."}, {"start": 415.0, "end": 417.4, "text": " This image is missing something."}, {"start": 417.4, "end": 419.0, "text": " Missing what?"}, {"start": 419.0, "end": 422.0, "text": " Well, of course, corgis."}, {"start": 422.0, "end": 424.4, "text": " And I cannot believe this."}, {"start": 424.4, "end": 427.59999999999997, "text": " If we specify the painting as the location,"}, {"start": 427.59999999999997, "end": 430.4, "text": " it will not only have a painterly style,"}, {"start": 430.4, "end": 435.4, "text": " but one that already matches the painting on the wall."}, {"start": 435.4, "end": 438.2, "text": " This is true for the other painting too."}, {"start": 438.2, "end": 440.2, "text": " This is incredible."}, {"start": 440.2, "end": 442.59999999999997, "text": " I absolutely love it."}, {"start": 442.59999999999997, "end": 446.0, "text": " And last test, does it?"}, {"start": 446.0, "end": 447.2, "text": " Yes, it does."}, {"start": 447.2, "end": 
451.0, "text": " If we are outside of the painting at the photo part,"}, {"start": 451.0, "end": 454.0, "text": " this good boy becomes photorealistic."}, {"start": 454.0, "end": 457.4, "text": " Requesting variance is also a possibility here."}, {"start": 457.4, "end": 459.4, "text": " So, what do you think?"}, {"start": 459.4, "end": 461.2, "text": " Which is the best boy?"}, {"start": 461.2, "end": 463.4, "text": " Let me know in the comments below."}, {"start": 463.4, "end": 465.2, "text": " And number eight."}, {"start": 465.2, "end": 468.4, "text": " If we can place any object anywhere,"}, {"start": 468.4, "end": 472.59999999999997, "text": " this is an excellent tool to perform interior design."}, {"start": 472.59999999999997, "end": 475.2, "text": " We can put a couch wherever we please,"}, {"start": 475.2, "end": 480.0, "text": " and I am already looking forward to inspecting the reflections here."}, {"start": 480.0, "end": 484.0, "text": " Oh yes, this is very difficult to compute."}, {"start": 484.0, "end": 487.2, "text": " This is not a matte diffuse object,"}, {"start": 487.2, "end": 491.0, "text": " and not even a mirror-like specular surface,"}, {"start": 491.0, "end": 495.6, "text": " but a glossary reflection that is somewhere in between."}, {"start": 495.6, "end": 497.6, "text": " But it gets worse."}, {"start": 497.6, "end": 499.6, "text": " This is a textured object,"}, {"start": 499.6, "end": 502.6, "text": " which also has to be taken into consideration"}, {"start": 502.6, "end": 505.8, "text": " and proper shadows also have to be generated"}, {"start": 505.8, "end": 507.8, "text": " in a difficult situation"}, {"start": 507.8, "end": 511.4, "text": " where light comes from a ton of different directions."}, {"start": 511.4, "end": 513.2, "text": " This is a nightmare,"}, {"start": 513.2, "end": 515.2, "text": " and the results are not perfect,"}, {"start": 515.2, "end": 516.8, "text": " but my goodness,"}, {"start": 516.8, "end": 518.6, "text": " if this is not an AI"}, {"start": 518.6, "end": 522.2, "text": " that has a proper understanding of the world around us,"}, {"start": 522.2, "end": 524.2, "text": " I don't know what is."}, {"start": 524.2, "end": 528.2, "text": " Absolutely incredible progress in just one year."}, {"start": 528.2, "end": 530.7, "text": " I cannot believe my eyes."}, {"start": 530.7, "end": 531.8000000000001, "text": " You know what?"}, {"start": 531.8000000000001, "end": 533.0, "text": " Number nine."}, {"start": 533.0, "end": 536.0, "text": " Actually, let's look at how much it has improved"}, {"start": 536.0, "end": 539.2, "text": " since Dolly won side by side."}, {"start": 539.2, "end": 540.6, "text": " Oh my."}, {"start": 540.6, "end": 543.0, "text": " Now, there is no contest here."}, {"start": 543.0, "end": 545.8000000000001, "text": " Dolly too is on a completely different level"}, {"start": 545.8000000000001, "end": 547.6, "text": " from its first iteration."}, {"start": 547.6, "end": 549.7, "text": " This is so much better,"}, {"start": 549.7, "end": 553.6, "text": " and once again, such improvement in just a year."}, {"start": 553.6, "end": 555.6, "text": " What a time to be alive."}, {"start": 555.6, "end": 559.2, "text": " And what do you think if Dolly 3 appears,"}, {"start": 559.2, "end": 561.4, "text": " what will it be capable of?"}, {"start": 561.4, "end": 564.6, "text": " What would you use this, or Dolly 3-4?"}, {"start": 564.6, "end": 567.0, "text": " Please let me know in the comments below."}, {"start": 567.0, "end": 
569.0, "text": " I'd love to know what you think."}, {"start": 569.0, "end": 573.0, "text": " Now, of course, not even Dolly 2 is perfect."}, {"start": 573.0, "end": 574.5, "text": " Look at that."}, {"start": 574.5, "end": 575.7, "text": " Number 10."}, {"start": 575.7, "end": 579.2, "text": " Inspect the pictures and tell me what you think"}, {"start": 579.2, "end": 581.9000000000001, "text": " the prompt for this must have been."}, {"start": 581.9000000000001, "end": 583.8000000000001, "text": " Not easy, right?"}, {"start": 583.8000000000001, "end": 586.8000000000001, "text": " Let me know your tips in the comments below."}, {"start": 586.8000000000001, "end": 591.5, "text": " Well, it was a sign that says deep learning."}, {"start": 591.5, "end": 594.3000000000001, "text": " Well, A plus for effort, little AI,"}, {"start": 594.3000000000001, "end": 597.8000000000001, "text": " but this is certainly one of the failure cases."}, {"start": 597.8000000000001, "end": 599.3000000000001, "text": " And you know what?"}, {"start": 599.3000000000001, "end": 601.0, "text": " I cannot resist."}, {"start": 601.0, "end": 601.9000000000001, "text": " Plus one."}, {"start": 601.9000000000001, "end": 604.1, "text": " If you have been holding onto your papers,"}, {"start": 604.1, "end": 607.8000000000001, "text": " now squeeze that paper at least as hard"}, {"start": 607.8000000000001, "end": 611.3000000000001, "text": " as these colors are squeezing their papers."}, {"start": 611.3000000000001, "end": 613.5, "text": " So, which one are you?"}, {"start": 613.5, "end": 617.4, "text": " Which one resembles your reaction to this paper the best?"}, {"start": 617.4, "end": 619.5, "text": " Let me know in the comments below."}, {"start": 619.5, "end": 625.0, "text": " And yes, as promised, here is what it thinks of itself."}, {"start": 625.0, "end": 628.5, "text": " It is very soft and cuddly."}, {"start": 628.5, "end": 632.8000000000001, "text": " Or at least it wants us to think that it is so."}, {"start": 632.8, "end": 634.3, "text": " Food for thought."}, {"start": 634.3, "end": 637.8, "text": " And if you speak robot and have any idea"}, {"start": 637.8, "end": 641.5, "text": " what this writing could mean, make sure to let me know below."}, {"start": 641.5, "end": 643.3, "text": " And one more thing."}, {"start": 643.3, "end": 649.3, "text": " We noted that this AI was trained on 650 million images"}, {"start": 649.3, "end": 652.6999999999999, "text": " and uses 3.5 billion parameters."}, {"start": 652.6999999999999, "end": 656.8, "text": " These are not rookie numbers by any stretch of the imagination."}, {"start": 656.8, "end": 659.5, "text": " However, I am hoping that with this,"}, {"start": 659.5, "end": 662.8, "text": " there will be a chance that other independent groups"}, {"start": 662.8, "end": 667.2, "text": " will also be able to train and use their own dolly too."}, {"start": 667.2, "end": 669.7, "text": " And just in the last few years,"}, {"start": 669.7, "end": 673.1, "text": " OpenAI has given us legendary papers."}, {"start": 673.1, "end": 677.4, "text": " For instance, an AI that can play hide and seek,"}, {"start": 677.4, "end": 679.4, "text": " solve math tests,"}, {"start": 679.4, "end": 684.3, "text": " or play a game called Dota 2 on a world champion level."}, {"start": 684.3, "end": 687.5, "text": " Given these, I hereby appoint Dolly"}, {"start": 687.5, "end": 691.0, "text": " into the pantheon of these legendary works."}, {"start": 691.0, "end": 694.7, "text": " And I have to 
say, I am super excited to see"}, {"start": 694.7, "end": 696.7, "text": " what they come up with next."}, {"start": 696.7, "end": 700.7, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 700.7, "end": 704.9, "text": " If you are looking for inexpensive Cloud GPUs for AI,"}, {"start": 704.9, "end": 707.7, "text": " check out Lambda GPU Cloud."}, {"start": 707.7, "end": 713.2, "text": " And get this, they've recently launched an NVIDIA RTX 8000"}, {"start": 713.2, "end": 716.3, "text": " with 48GB of memory."}, {"start": 716.3, "end": 718.4, "text": " And hold onto your papers,"}, {"start": 718.4, "end": 725.0999999999999, "text": " because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 725.0999999999999, "end": 728.5999999999999, "text": " Join researchers at organizations like Apple,"}, {"start": 728.5999999999999, "end": 732.9, "text": " MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 732.9, "end": 735.5999999999999, "text": " workstations, or servers."}, {"start": 735.5999999999999, "end": 738.1999999999999, "text": " Make sure to go to LambdaLabs.com,"}, {"start": 738.1999999999999, "end": 744.0999999999999, "text": " slash papers to sign up for one of their amazing GPU instances today."}, {"start": 744.1, "end": 747.2, "text": " Our thanks to Lambda for their longstanding support"}, {"start": 747.2, "end": 750.6, "text": " and for helping us make better videos for you."}, {"start": 750.6, "end": 752.9, "text": " Thanks for watching and for your generous support,"}, {"start": 752.9, "end": 782.6999999999999, "text": " and I'll see you next time."}]
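A quick aside on the reflection talk in the transcript segments above: the couch example is hard to render precisely because a glossy surface is neither matte diffuse nor a perfect mirror, but something in between. Here is a tiny Python illustration of that idea (my own toy shader with made-up constants, not anything from the DALL-E 2 paper):

import numpy as np

def shade(normal, light, view, glossiness, shininess=32.0):
    # Normalize all directions; light and view point away from the surface.
    n, l, v = (x / np.linalg.norm(x) for x in (normal, light, view))
    diffuse = max(float(np.dot(n, l)), 0.0)                 # matte (Lambertian) term
    r = 2.0 * np.dot(n, l) * n - l                          # mirror reflection of the light
    specular = max(float(np.dot(r, v)), 0.0) ** shininess   # sharp mirror-like lobe
    # glossiness 0 = fully matte, 1 = fully mirror-like, in between = glossy
    return (1.0 - glossiness) * diffuse + glossiness * specular

n = np.array([0.0, 0.0, 1.0])          # surface normal
l = np.array([0.3, 0.3, 1.0])          # direction toward the light
v = np.array([-0.2, 0.1, 1.0])         # direction toward the camera
for g in (0.0, 0.5, 1.0):              # matte, glossy, mirror-like
    print(f"glossiness={g}: brightness {shade(n, l, v, g):.3f}")

Sweeping the glossiness weight from 0 to 1 walks the same matte-to-mirror spectrum the narration describes.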
Two Minute Papers
https://www.youtube.com/watch?v=cS4jCvzey-4
NVIDIA's New AI: Next Level Image Editing! 👌
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik Rakshit!): http://wandb.me/EditGAN 📝 The paper "EditGAN: High-Precision Semantic Image Editing" is available here: https://nv-tlabs.github.io/editGAN/ https://arxiv.org/abs/2111.03186 https://github.com/nv-tlabs/editGAN_release https://nv-tlabs.github.io/editGAN/editGAN_supp_compressed.pdf ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA #EditGAN
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to do some incredible experiments with NVIDIA's next-level image editor AI. Now, there are already many AI-based techniques out there that are extremely good at creating new human or even animal faces. You see here how beautifully this alias-free GAN technique can morph one result into another. It truly is a sight to behold. These are really good at creating images. You see, this was possible just a year ago. But this new one, this can do more: semantic edits to our images. Let's dive in and see all the amazing things that it can do. If we take a photo of our friends we haven't seen in a while, what happens? Well, of course, the eyes are closed or are barely open. And from now on, this is not a problem. Done. And if we wish to add a smile or take it away, boom, that is also possible. Also, the universal classic, looking too much into the camera, not a problem. Done. Now, the AI can do even more kinds of edits: hairstyle, eyebrows, wrinkles, you name it. However, that's not even the best part. You have seen nothing yet. Are you ready for the best part? Hold on to your papers and check this out. Yes, it even works on drawings, paintings. And, oh my, even statues as well. Absolutely amazing. How cool is that? I love it. Researchers refer to these as out-of-domain examples. The best part is that this is a proper learning-based method. This means that by learning on human portraits, it has obtained general knowledge. So now, it doesn't just understand these as clumps of pixels, it now understands concepts. Thus, it can reuse its knowledge, even when facing a completely different kind of image, just like the ones you see here. This looks like science fiction, and here it is, right in front of our eyes. Wow. But it doesn't stop there. It can also perform semantic image editing. What is that? Well, look. We can upload an image, look at the labels of these images, and edit the labels themselves. Well, okay, but what is all this good for? Well, the AI understands how these labels correspond to the real photo. So, check this out. We do the easy part, editing the labels, and the AI does the hard part, which is changing the photo appropriately. Look. Yes, this is just incredible. The best part is that we are now even seeing a hint of creativity in some of these solutions. And if you're one of those folks who feel like the wheels and rims are never quite big enough, well, NVIDIA has got you covered, that's for sure. And here comes the kicker. It learned to create these labels automatically, by itself. So, how many label-to-image pairs did it have to look at to perform all this? Well, what do you think? Millions, or maybe hundreds of thousands? Please leave a comment. I'd love to hear what you think. And the answer is 16. What? 16 million? No, 16. Two to the power of four. And that's it. Well, that is one of the most jaw-dropping facts about this paper. This AI can learn general concepts from very few examples. We don't need to label the entirety of the internet to have this technique be able to do its magic. That is absolutely incredible. Really, learning from just 16 examples. Wow! The supplementary materials also showcase loads of results, so make sure to have a look at that. If you do, you'll find that it can even take an image of a bird and adjust its beak size, even to extreme proportions. Very amusing. Or we can even ask them to look up.
And I have to say, if no one told me that these are synthetic images, I might not be able to tell. Now note that some of these capabilities were present in previous techniques, but this new method does all of these with really high quality, and all this in one elegant package. What a time to be alive! Now, of course, not even this technique is perfect; there are still cases that are so far outside of the training set of the AI that the result becomes completely unusable. And this is the perfect place to invoke the first law of papers. What is that? Well, the first law of papers says that research is a process. Don't look at where we are, look at where we will be, two more papers down the line. And don't forget, a couple of papers before this, we were lucky to do this. And now, see how far we've come. This is incredible progress in just one year. So, what do you think? What would you use this for? I'd love to hear your thoughts, so please let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
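To make the label-editing workflow above more tangible, here is a minimal sketch of the underlying EditGAN-style idea in Python with PyTorch. The toy generator below is a stand-in I invented for this illustration, not the paper's actual architecture: a GAN renders an RGB image and a segmentation map from the same latent code, and an edit is applied by optimizing the latent until the rendered labels match the user-edited ones, which drags the photo along with them.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    # Stand-in for a StyleGAN-class generator with an extra segmentation branch.
    def __init__(self, latent_dim=64, size=32, classes=5):
        super().__init__()
        self.rgb = nn.Linear(latent_dim, 3 * size * size)
        self.seg = nn.Linear(latent_dim, classes * size * size)
        self.size, self.classes = size, classes

    def forward(self, w):
        img = self.rgb(w).view(-1, 3, self.size, self.size)
        seg = self.seg(w).view(-1, self.classes, self.size, self.size)
        return img, seg

def edit_latent(gen, w, target_labels, steps=200, lr=0.05):
    # Optimize the latent code until the rendered segmentation matches the
    # user-edited label map; the rendered photo is dragged along with it.
    for p in gen.parameters():
        p.requires_grad_(False)        # the generator itself stays frozen
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        _, seg_logits = gen(w)
        loss = ce(seg_logits, target_labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()

gen = ToyGenerator()
w0 = torch.randn(1, 64)
_, seg0 = gen(w0)
labels = seg0.argmax(dim=1)            # current labels, shape (1, 32, 32)
labels[:, :8, :8] = 2                  # "paint" a region with a different class
w1 = edit_latent(gen, w0, labels)
edited_image, _ = gen(w1)              # the photo now reflects the label edit

In the real system the generator is a full StyleGAN and the segmentation branch is what those 16 labeled examples were used to train; as I read the paper, latent optimization against an edited label map is the core of the "we edit the labels, the AI edits the photo" trick.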
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 12.24, "text": " Today, we are going to do some incredible experiments with NVIDIA's next-level image editor AI."}, {"start": 12.24, "end": 20.240000000000002, "text": " Now, there are already many AI-based techniques out there that are extremely good at creating no human"}, {"start": 20.240000000000002, "end": 22.8, "text": " or even animal faces."}, {"start": 22.8, "end": 30.16, "text": " You see here how beautifully this Elias-Freegan technique can morph one result into another."}, {"start": 31.04, "end": 37.04, "text": " It truly is a sight to behold. These are really good at editing images."}, {"start": 37.04, "end": 46.08, "text": " You see, this was possible just a year ago. But this no one, this can do more semantic edits to our"}, {"start": 46.08, "end": 51.28, "text": " images. Let's dive in and see all the amazing things that it can do."}, {"start": 51.28, "end": 56.4, "text": " If we take a photo of our friends we haven't seen in a while, what happens?"}, {"start": 56.4, "end": 61.92, "text": " Well, of course, the eyes are closed or are barely open."}, {"start": 61.92, "end": 66.64, "text": " And from now on, this is not a problem. Done."}, {"start": 66.64, "end": 74.08, "text": " And if we wish to add a smile or take it away, boom, that is also possible."}, {"start": 74.08, "end": 80.24000000000001, "text": " Also, the universal classic, looking too much into the camera, not a problem."}, {"start": 80.24, "end": 89.28, "text": " Done. Now, the AI can do even more kinds of edits, hairstyle, eyebrows, wrinkles, you name it."}, {"start": 89.28, "end": 96.39999999999999, "text": " However, that's not even the best part. You have seen nothing yet. Are you ready for the best part?"}, {"start": 96.39999999999999, "end": 104.24, "text": " Hold on to your papers and check this out. Yes, it even works on drawings, paintings."}, {"start": 104.24, "end": 109.83999999999999, "text": " And, oh my, even statues as well."}, {"start": 109.83999999999999, "end": 118.8, "text": " Absolutely amazing. How cool is that? I love it. Researchers refer to these as out of domain"}, {"start": 118.8, "end": 125.6, "text": " examples. The best part is that this is a proper learning-based method. This means that by learning"}, {"start": 125.6, "end": 132.24, "text": " on human portraits, it has obtained general knowledge. So now, it doesn't just understand these"}, {"start": 132.24, "end": 139.44, "text": " as clumps of pixels, it now understands concepts. Thus, it can reuse its knowledge,"}, {"start": 139.44, "end": 144.64000000000001, "text": " even when facing a completely different kind of image, just like the ones you see here."}, {"start": 145.28, "end": 152.0, "text": " This looks like science fiction, and here it is, right in front of our eyes. Wow."}, {"start": 152.88, "end": 159.60000000000002, "text": " But it doesn't stop there. It can also perform semantic image editing. What is that?"}, {"start": 159.6, "end": 168.24, "text": " Well, look. We can upload an image, look at the labels of these images, and edit the labels themselves."}, {"start": 169.12, "end": 177.68, "text": " Well, okay, but what is all this good for? Well, the AI understands how these labels correspond"}, {"start": 177.68, "end": 186.48, "text": " to the real photo. So, check this out. 
We do the easy part, edit the labels, and the AI does the"}, {"start": 186.48, "end": 195.51999999999998, "text": " hard part, which is changing the photo appropriately. Look. Yes, this is just incredible."}, {"start": 195.51999999999998, "end": 201.76, "text": " The best part is that we are now even seeing a hint of creativity with some of these solutions."}, {"start": 202.56, "end": 208.64, "text": " And if you're one of those folks who feel like the wheels and rims are never quite big enough,"}, {"start": 208.64, "end": 215.83999999999997, "text": " well, Nvidia has got you covered, that's for sure. And here comes the kicker. It learned to create"}, {"start": 215.84, "end": 223.52, "text": " these labels automatically by itself. So, how many labels to image pairs did it have to look at"}, {"start": 223.52, "end": 231.12, "text": " to perform all this? Well, what do you think? Millions, or maybe hundreds of thousands?"}, {"start": 231.76, "end": 241.12, "text": " Please leave a comment. I'd love to hear what you think. And the answer is 16. What? 16 million?"}, {"start": 241.12, "end": 251.12, "text": " No, 16. Two to the power of four. And that's it. Well, that is one of the most jaw-dropping facts"}, {"start": 251.12, "end": 259.04, "text": " about this paper. This AI can learn general concepts from very few examples. We don't need to label"}, {"start": 259.04, "end": 266.08, "text": " the entirety of the internet to have this technique be able to do its magic. That is absolutely"}, {"start": 266.08, "end": 274.88, "text": " incredible. Really learning from just 16 examples. Wow! The supplementary materials also showcase"}, {"start": 274.88, "end": 282.15999999999997, "text": " loads of results, so make sure to have a look at that. If you do, you'll find that it can even take"}, {"start": 282.15999999999997, "end": 291.76, "text": " an image of a bird and adjust their big sizes even to extreme proportions. Very amusing. Or we can"}, {"start": 291.76, "end": 298.71999999999997, "text": " even ask them to look up. And I have to say, if no one would tell me that these are synthetic"}, {"start": 298.71999999999997, "end": 305.36, "text": " images, I might not be able to tell. Now note that some of these capabilities were present in"}, {"start": 305.36, "end": 312.8, "text": " previous techniques, but this new method does all of these with really high quality and all this"}, {"start": 312.8, "end": 320.4, "text": " in one elegant package. What a time to be alive! Now, of course, not even this technique is perfect,"}, {"start": 320.4, "end": 326.4, "text": " there are still cases that are so far outside of the training set of the AI that the result"}, {"start": 326.4, "end": 333.12, "text": " becomes completely unusable. And this is the perfect place to invoke the first law of papers."}, {"start": 333.84, "end": 341.35999999999996, "text": " What is that? Well, the first law of papers says that research is a process. Don't look at where we"}, {"start": 341.35999999999996, "end": 348.4, "text": " are, look at where we will be, two more papers down the line. And don't forget, a couple papers"}, {"start": 348.4, "end": 358.64, "text": " before this, we were lucky to do this. And now, see how far we've come. This is incredible progress"}, {"start": 358.64, "end": 366.23999999999995, "text": " in just one year. So, what do you think? What would you use this for? 
I'd love to hear your thoughts,"}, {"start": 366.23999999999995, "end": 372.15999999999997, "text": " so please let me know in the comments below. What you see here is a report of this exact paper we"}, {"start": 372.15999999999997, "end": 377.35999999999996, "text": " have talked about which was made by weights and biases. I put a link to it in the description,"}, {"start": 377.36, "end": 381.52000000000004, "text": " make sure to have a look, I think it helps you understand this paper better."}, {"start": 382.08000000000004, "end": 387.36, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 387.36, "end": 392.96000000000004, "text": " Using their system, you can create beautiful reports like this one to explain your findings to"}, {"start": 392.96000000000004, "end": 399.68, "text": " your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research,"}, {"start": 399.68, "end": 406.32, "text": " GitHub, and more. And the best part is that weight and biases is free for all individuals,"}, {"start": 406.32, "end": 413.92, "text": " academics, and open source projects. Make sure to visit them through wnb.com slash papers,"}, {"start": 413.92, "end": 419.28, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 419.28, "end": 425.03999999999996, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better"}, {"start": 425.04, "end": 439.20000000000005, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=4lQkQSmA8nA
This New AI Makes DeepFakes... For Animation Movies! 🧑‍🎨
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "Stitch it in Time: GAN-Based Facial Editing of Real Videos" is available here: https://stitch-time.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Chapters 00:00 Intro 00:23 This new work is different 00:50 Obama speech reimagined 01:26 Temporal coherence 02:41 Not perfect 02:56 A DeepFake of ... me! 03:58 It works on animated characters too 04:30 So much improvement in so little time 05:00 It has its limits Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deepfake #deepfakes
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to use this new AI film director technique to reimagine videos and even movies, just by saying what we want to add to them. Yes, really, and you will see that the results are absolute insanity. Now, previous techniques are already able to take a photo of someone and make them young or old, but this work, this is something else. Why? This can do the same for not only one still image, but for an entire video. Just imagine how cool it would be to create a more smiling version of an actor without actually having to re-record the scene. Hold on to your papers and look at how we can take the original video of an Obama speech and make it a little happier by adding a smile to his face. Or even make him younger or older. And these are absolutely amazing. I love the grey hair. I love the young man hair. I also love how the gestures and head turns are going through. This looks like something straight out of a science fiction movie. I love it. Now, what is the key here? The key here is that this AI knows about temporal coherence. What does that mean? Well, it means that it does this in a way that when it synthesizes the second image, it remembers what it did to the first image and takes it into consideration. That is not trivial at all. And if you feel that this does not sound like a huge deal, let's have a look at an earlier technique that doesn't remember what it did just a moment ago. Prepare your eyes for a fair bit of flickering. This is an AI-based technique from about six years ago that performs style transfer. And when creating image two, it completely disregards what it did to image one. And as you see, this really isn't the way to go. And with all this newfound knowledge, let's look at the footage of the new AI and look at the temporal coherence together. Just look at that. That is absolutely incredible. I am seeing close to perfect temporal coherence. My goodness. Now, not even this technique is perfect. Look, when trying it here, Mark Zuckerberg's hair seems to take on a life of its own. I also find the mouth movements a bit weaker than with the other results. And now, if you allow me, let me also enter the fray. Well, this is not me, but how the AI re-imagines me as a young man. Good job, little man. Keep those papers coming. This is incredible. I love it. And this is what I actually look like. Okay, just kidding. This is Old Man Károly as imagined by the AI, who is vigorously telling you that papers were way better back in his day. And this is the original. And this is my sister, Carolina, filling in when I am busy. I would say that the temporal coherence is weaker here, but it is understandable, because the AI has to do a ton of heavy lifting by synthesizing so much more than just a smile. And even with all these amazing results, so far you have seen nothing yet. No sir, it can do even more. If you have been holding onto your paper so far, now squeeze that paper, because it works not only on real humans, but animated characters too. And just look at that. We can order these characters to smile more, be angrier or happier. And we can even add lipstick to a virtual character without having to ask any of the animators and modelers of this movie. How cool is that? And remember, we were lucky to be able to do this to one still image just a couple of papers ago. And now, all this with temporal coherence. And I hope you are enjoying these results, many of which you can only see here on Two Minute Papers.
Nowhere else. And of course, I would like to send a huge thank you to the authors, who took time off their busy day to generate these just for us. Now, even the animated movie synthesis has some weaker results. In my opinion, this is one of them, but still very impressive. And don't even get me started about the rest. These are so good. And just imagine what we will be able to do just a couple more papers down the line. My goodness. What a time to be alive. So, what would you use this for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
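Since temporal coherence is the key concept in this episode, here is one way to build intuition for it in code. This is my own simplified illustration, not the paper's pipeline: if every frame of a video is inverted into a GAN latent code and edited independently, the codes jitter from frame to frame, which shows up as flicker; smoothing the codes over time is the simplest possible fix.

import numpy as np

def smooth_latents(latents, alpha=0.9):
    # Exponential moving average over per-frame latent codes of shape (T, D).
    out = latents.copy()
    for t in range(1, len(out)):
        out[t] = alpha * out[t - 1] + (1.0 - alpha) * out[t]
    return out

T, D = 120, 512                            # 120 frames, 512-dimensional codes
per_frame = np.random.randn(T, D)          # e.g., from per-frame GAN inversion
stable = smooth_latents(per_frame)         # feed these to the generator instead
raw_jitter = np.linalg.norm(np.diff(per_frame, axis=0), axis=1).mean()
smooth_jitter = np.linalg.norm(np.diff(stable, axis=0), axis=1).mean()
print(f"frame-to-frame jitter: raw {raw_jitter:.2f} vs smoothed {smooth_jitter:.2f}")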
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.8, "end": 13.36, "text": " Today we are going to use this new AI Film Director technique to reimagine videos and even movies"}, {"start": 13.36, "end": 20.8, "text": " and just by saying what we want to add to it. Yes, really, and you will see that the results are"}, {"start": 20.8, "end": 27.68, "text": " absolute insanity. Now, previous techniques are already able to take a photo of someone,"}, {"start": 27.68, "end": 35.92, "text": " make them young or old, but this work, this is something else. Why? This can do the same"}, {"start": 35.92, "end": 44.72, "text": " for not only one still image, but for an entire video. Just imagine how cool it would be to create"}, {"start": 44.72, "end": 50.8, "text": " a more smiling version of an actor without actually having to re-record the scene."}, {"start": 50.8, "end": 58.879999999999995, "text": " Hold on to your papers and look at how we can take the original video of an Obama speech and make"}, {"start": 58.879999999999995, "end": 68.72, "text": " it a little happier by adding a smile to his face. Or even make him younger or older."}, {"start": 69.75999999999999, "end": 77.84, "text": " And these are absolutely amazing. I love the grey hair. I love the young man hair. I also love"}, {"start": 77.84, "end": 84.16, "text": " how the gestures and head turns are going through. This looks like something straight out of a"}, {"start": 84.16, "end": 91.92, "text": " science fiction movie. I love it. Now, what is the key here? The key here is that this AI knows"}, {"start": 91.92, "end": 99.52000000000001, "text": " about temporal coherence. What does that mean? Well, it means that it does this in a way that when"}, {"start": 99.52000000000001, "end": 107.04, "text": " it synthesizes the second image, it remembers what it did to the first image and takes it into"}, {"start": 107.04, "end": 115.2, "text": " consideration. That is not trivial at all. And if you feel that this does not sound like a huge deal,"}, {"start": 115.2, "end": 121.04, "text": " let's have a look at an earlier technique that doesn't remember what it did just a moment ago."}, {"start": 121.60000000000001, "end": 128.88, "text": " Prepare your eyes for a fair bit of flickering. This is an AI-based technique from about six years ago"}, {"start": 128.88, "end": 136.48000000000002, "text": " that performs style transfer. And when creating image too, it completely disregards what it did to"}, {"start": 136.48, "end": 143.51999999999998, "text": " image one. And as you see, this really isn't the way to go. And with all this newfound knowledge,"}, {"start": 143.51999999999998, "end": 149.12, "text": " let's look at the footage of the new AI and look at the temporal coherence together."}, {"start": 150.39999999999998, "end": 158.23999999999998, "text": " Just look at that. That is absolutely incredible. I am seeing close to perfect temporal coherence."}, {"start": 159.04, "end": 165.67999999999998, "text": " My goodness. Now, not even this technique is perfect. Look, when trying it here,"}, {"start": 165.68, "end": 172.8, "text": " Mark Zuckerberg's hair seems to come to a life of its own. I also find the mouth movements"}, {"start": 172.8, "end": 179.76000000000002, "text": " a bit weaker than with the other results. 
And now, if you allow me, let me also enter the fray."}, {"start": 181.12, "end": 189.76000000000002, "text": " Well, this is not me, but how the AI re-imagines me as a young man. Good job little man."}, {"start": 189.76, "end": 197.67999999999998, "text": " Keep those papers coming. This is incredible. I love it. And this is what I actually look like."}, {"start": 198.56, "end": 206.23999999999998, "text": " Okay, just kidding. This is Old Mancaroy as imagined by the AI who is vigorously telling you"}, {"start": 206.23999999999998, "end": 213.6, "text": " that papers were way better back in his day. And this is the original. And this is my sister,"}, {"start": 213.6, "end": 220.4, "text": " Carolina, feeling in when I am busy. I would say that the temporal coherence is weaker here,"}, {"start": 220.4, "end": 228.0, "text": " but it is understandable because the AI has to do a ton of heavy lifting by synthesizing so much"}, {"start": 228.0, "end": 235.84, "text": " more than just a smile. And even with all these amazing results, so far you have seen nothing yet."}, {"start": 235.84, "end": 244.4, "text": " No sir, it can do even more. If you have been holding onto your paper so far, now squeeze that paper"}, {"start": 244.4, "end": 252.4, "text": " because it works not only on real humans, but animated characters too. And just look at that."}, {"start": 253.12, "end": 262.96, "text": " We can order these characters to smile more, be angrier or happier. And we can even add lipstick"}, {"start": 262.96, "end": 269.59999999999997, "text": " to a virtual character without having to ask any of the animators and muddlers of this movie."}, {"start": 270.32, "end": 278.15999999999997, "text": " How cool is that? And remember, we were lucky to be able to do this to one still image just a"}, {"start": 278.15999999999997, "end": 286.96, "text": " couple papers ago. And now all this with temporal coherence. And I hope you are enjoying these results,"}, {"start": 286.96, "end": 293.68, "text": " many of which you can only see here on two minute papers. No where else. And of course,"}, {"start": 293.68, "end": 299.52, "text": " I would like to send a huge thank you to the authors who took time off their busy day to generate"}, {"start": 299.52, "end": 307.59999999999997, "text": " these just for us. Now even the animated movie synthesis has some weaker results. In my opinion,"}, {"start": 307.59999999999997, "end": 314.47999999999996, "text": " this is one of them, but still very impressive. And don't even get me started about the rest."}, {"start": 314.48, "end": 322.8, "text": " These are so good. And just imagine what we will be able to do just a couple more papers down the"}, {"start": 322.8, "end": 330.64000000000004, "text": " line. My goodness. What a time to be alive. So what would you use this for? What do you expect to"}, {"start": 330.64000000000004, "end": 336.40000000000003, "text": " happen? A couple more papers down the line? Please let me know in the comments below. I'd love to"}, {"start": 336.40000000000003, "end": 342.24, "text": " hear your thoughts. Wait and buy a cease provides tools to track your experiments in your deep learning"}, {"start": 342.24, "end": 348.16, "text": " projects using their system. You can create beautiful reports like this one to explain your"}, {"start": 348.16, "end": 354.24, "text": " findings to your colleagues better. 
It is used by many prestigious labs, including OpenAI,"}, {"start": 354.24, "end": 361.28000000000003, "text": " Toyota Research, GitHub and more. And the best part is that waits and biases is free for all"}, {"start": 361.28000000000003, "end": 369.76, "text": " individuals, academics and open source projects. Make sure to visit them through wmb.com slash papers"}, {"start": 369.76, "end": 375.12, "text": " or just click the link in the video description and you can get a free demo today."}, {"start": 375.12, "end": 380.88, "text": " Our thanks to waits and biases for their long standing support and for helping us make better"}, {"start": 380.88, "end": 400.32, "text": " videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=PmxhCyhevMY
OpenAI’s New AI Thinks That Birds Aren’t Real! 🕊️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The #OpenAI paper "Aligning Language Models to Follow Instructions" is available here: https://openai.com/blog/instruction-following/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: - https://pixabay.com/photos/seagull-gull-bird-wildlife-sea-1900657/ - https://pixabay.com/vectors/cross-no-x-forbidden-closed-42928/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 00:00 Intro 01:16 Moon landing 02:10 Round 1 - Smashing pumpkins 03:36 Round 2 - Code summarization 04:23 Round 3 - Frog poem! 05:06 Users love it 05:50 What? Birds aren't real? Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to explore what happens if we unleash an AI to read the internet and then ask it some silly questions, and we'll get some really amazing answers. But how? Well, in the last few years, OpenAI set out to train an AI named GPT-3 that could finish your sentences. Then they made Image GPT, and it could even finish your images. Yes, not kidding. It could identify that the cat here likely holds a piece of paper and finish the picture accordingly, and it even understood that if we have a droplet here and we see just a portion of the ripples, then this means a splash must be filled in. So, in summary, GPT-3 is trained to finish your sentences or even your images. However, scientists at OpenAI identified that it really isn't great at following instructions. Here is an excellent example. Look, we ask it to explain the moon landing to a six-year-old, and it gets very confused. It seems to be in text completion mode instead of trying to follow our instructions. Meanwhile, this is InstructGPT, their new method, which can not only finish our sentences, but also follow our instructions. And would you look at that? It does that successfully. Good job, little AI. But this was a really simple example. Of course, we are experienced Fellow Scholars here, so let's try to find out what this AI is really capable of in three really cool examples. Remember in The Hitchhiker's Guide to the Galaxy, where people could ask an all-knowing machine the most important questions, the things that trouble humanity? Yes, we are going to do exactly that. Round one. So, dear all-knowing AI, what happens if you fire a cannonball directly at a pumpkin at high speeds? Well, would you look at that? According to GPT-3, pumpkins are strong magnets. I don't know which internet forum told the AI that. Not good. And now, hold on to your papers and let's see the new technique's answer together. Wow! This is so much more informative. Let's break it down together. It starts out by hedging, noting that it is hard to say because there are too many unpredictable factors involved. Annoying as it might seem, it is correct to say all this. Good start. Then, it lists at least some of the factors that might decide the fate of that pumpkin, like the size of the cannonball, distance, and velocity. Yes, we are getting there, but please give me something concrete. Yes, there we go. It says that, quote, some of the more likely possible outcomes include breaking or knocking the pumpkin to the ground, cracking the pumpkin, or completely obliterating it. Excellent. A little smarty-pants AI at our disposal. Amazing. I love it. Round two, code summarization. We know that DeepMind's AlphaCode is capable of reading a competition-level programming problem and coding up a correct solution right in front of our eyes. That is all well and good, but if we give GPT-3 a piece of code and ask it what it does, well, the answer is not only not very informative, but it's also incorrect. Now, InstructGPT gives a much more insightful answer, which shows a bit of understanding of what this code does. That is amazing. Note that it is not completely right, but it is directionally correct. Partial credit for the AI. Round three, write me a poem. About what? Well, about a wise frog. With GPT-3 we get the usual confusion. By round three we really see that it was really not meant to do this. And with InstructGPT, let's see... Hmm.
An all-knowing frog who is the master of disguise, a great teacher, and quite possibly the bringer of peace and tranquility to humanity. All written by the AI. This is fantastic. I love it. What a time to be alive. Now, we only looked at three examples here, but what about the rest? Worry not, OpenAI scientists ran a detailed user study and found out that people preferred InstructGPT's solutions way more often than the previous techniques' on a larger set of questions. That is a huge difference. Absolutely amazing. Once again, incredible progress, just one more paper down the line. So, is it perfect? No. Of course not. Let's highlight one of its limitations. If the question contains false premises, it accepts the premise as being real and goes with it. This leads to making things up. Yes, really. Check this out. Why aren't birds real? GPT-3 says something. I am not sure what this one is about. This almost sounds like gibberish. Meanwhile, InstructGPT accepts the premise that birds aren't real and even helps us craft an argument for that. This is a limitation, and I must say, a quite remarkable one. An AI that makes things up. Food for thought. And remember, at the start of this episode we looked at this moon landing example. Did you notice the issue there? Please let me know in the comments below. So, what is the issue here? Well, beyond not being all that informative, it was asked to describe the moon landing in a few sentences. This is not a few sentences. This is one sentence. If we give it constraints like that, it tries to adhere to them, but is often not too great at that. And of course, both of these shortcomings show us the way for an even better follow-up paper, which, based on previous progress in AI research, could appear maybe not even in years, but much quicker than that. If you are interested in such a follow-up work, make sure to subscribe and hit the bell icon to not miss it when it appears on Two Minute Papers. So, what would you use this for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
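One more detail worth unpacking: the user study above, where people ranked answers from the two models, mirrors how InstructGPT itself was trained, from human preference comparisons. Below is a toy sketch of that reward-modeling step (my simplification with stand-in features, not OpenAI's code): a small network learns to score the human-preferred answer above the rejected one.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model over stand-in answer features; a real system would embed
# the actual prompt and answer text instead of using random vectors.
reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(preferred, rejected):
    # -log sigmoid(r(preferred) - r(rejected)) pushes the reward of the
    # human-preferred answer above the reward of the rejected one.
    gap = reward_model(preferred) - reward_model(rejected)
    return -F.logsigmoid(gap).mean()

preferred = torch.randn(8, 16)   # features of 8 human-preferred answers
rejected = torch.randn(8, 16)    # features of the 8 rejected alternatives
for _ in range(100):
    opt.zero_grad()
    loss = preference_loss(preferred, rejected)
    loss.backward()
    opt.step()
print(f"final preference loss: {loss.item():.3f}")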
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 5.0, "end": 12.0, "text": " Today we are going to explore what happens if we unleash an AI to read the internet and"}, {"start": 12.0, "end": 18.0, "text": " then ask it some silly questions and we'll get some really amazing answers."}, {"start": 18.0, "end": 20.0, "text": " But how?"}, {"start": 20.0, "end": 29.0, "text": " Well, in the last few years, open AI set out to train an AI named GPT-3 that could finish your sentences."}, {"start": 29.0, "end": 34.0, "text": " Then they made image GPT and they could even finish your images."}, {"start": 34.0, "end": 36.0, "text": " Yes, not kidding."}, {"start": 36.0, "end": 44.0, "text": " It could identify that the cat here likely holds a piece of paper and finish the picture accordingly"}, {"start": 44.0, "end": 50.0, "text": " and even understood that if we have a droplet here and we see just a portion of the"}, {"start": 50.0, "end": 54.0, "text": " ripels then this means a splash must be filled in."}, {"start": 54.0, "end": 62.0, "text": " So, in summary GPT-3 is trained to finish your sentences or even your images."}, {"start": 62.0, "end": 69.0, "text": " However, scientists at OpenAI identified that it really isn't great at following instructions."}, {"start": 69.0, "end": 72.0, "text": " Here is an excellent example."}, {"start": 72.0, "end": 80.0, "text": " Look, we ask it to explain the moon landing to a six-year-old and it gets very confused."}, {"start": 80.0, "end": 86.0, "text": " It seems to be in text completion mode instead of trying to follow our instructions."}, {"start": 86.0, "end": 93.0, "text": " Meanwhile, this is instruct GPT near new method which can not only finish our sentences,"}, {"start": 93.0, "end": 97.0, "text": " but also follow our instructions."}, {"start": 97.0, "end": 100.0, "text": " And would you look at that?"}, {"start": 100.0, "end": 102.0, "text": " It does that successfully."}, {"start": 102.0, "end": 104.0, "text": " Good job, a little AI."}, {"start": 104.0, "end": 110.0, "text": " But this was a really simple example, but of course we are experienced fellow scholars here,"}, {"start": 110.0, "end": 117.0, "text": " so let's try to find out what this AI is really capable of in three really cool examples."}, {"start": 117.0, "end": 123.0, "text": " Remember in the Hitchhiker's Guide to Galaxy where people could ask an all-knowing machine"}, {"start": 123.0, "end": 127.0, "text": " the most important questions of things that trouble humanity."}, {"start": 127.0, "end": 131.0, "text": " Yes, we are going to do exactly that."}, {"start": 131.0, "end": 134.0, "text": " Round one, so they are all-knowing AI."}, {"start": 134.0, "end": 141.0, "text": " What happens if you fire a cannonball directly at a pumpkin at high speeds?"}, {"start": 141.0, "end": 145.0, "text": " Well, what do you see that?"}, {"start": 145.0, "end": 149.0, "text": " According to GPT-3, pumpkins are strong magnets."}, {"start": 149.0, "end": 152.0, "text": " I don't know which internet forum told the AI that?"}, {"start": 152.0, "end": 154.0, "text": " Not good."}, {"start": 154.0, "end": 160.0, "text": " And now hold on to your papers and let's see the new techniques answer together."}, {"start": 160.0, "end": 161.0, "text": " Wow!"}, {"start": 161.0, "end": 164.0, "text": " This is so much more informative."}, {"start": 164.0, "end": 166.0, "text": " Let's break it down together."}, {"start": 166.0, 
"end": 174.0, "text": " It starts out by hedging, noting that it is hard to say because there are too many unpredictable factors involved."}, {"start": 174.0, "end": 178.0, "text": " And knowing as it might seem, it is correct to say all this."}, {"start": 178.0, "end": 180.0, "text": " Good start."}, {"start": 180.0, "end": 188.0, "text": " Then, at least some of the factors that might decide the fate of that pumpkin like the size of the cannonball,"}, {"start": 188.0, "end": 190.0, "text": " distance and velocity."}, {"start": 190.0, "end": 195.0, "text": " Yes, we are getting there, but please give me something concrete."}, {"start": 195.0, "end": 197.0, "text": " Yes, there we go."}, {"start": 197.0, "end": 209.0, "text": " It says that, quote, some of the more likely possible outcomes include breaking or knocking the pumpkin to the ground, cracking the pumpkin, or completely obliterating it."}, {"start": 209.0, "end": 210.0, "text": " Excellent."}, {"start": 210.0, "end": 213.0, "text": " A little smarty pants AI at our disposal."}, {"start": 213.0, "end": 214.0, "text": " Amazing."}, {"start": 214.0, "end": 216.0, "text": " I love it."}, {"start": 216.0, "end": 219.0, "text": " Round 2, Code Samarization."}, {"start": 219.0, "end": 230.0, "text": " What deep mines Alpha Code is capable of reading a competition level programming problem and coding up a correct solution right in front of our eyes."}, {"start": 230.0, "end": 245.0, "text": " That is all well and good, but if we give GPT-3 a piece of code and ask it what it does, well the answer is not only not very informative, but it's also incorrect."}, {"start": 245.0, "end": 253.0, "text": " No instruct GPT gives a much more insightful answer, which shows a bit of understanding of what this code does."}, {"start": 253.0, "end": 255.0, "text": " That is amazing."}, {"start": 255.0, "end": 260.0, "text": " Note that it is not completely right, but it is directionally correct."}, {"start": 260.0, "end": 263.0, "text": " Partial credit for the AI."}, {"start": 263.0, "end": 266.0, "text": " Round 3, write me a poem."}, {"start": 266.0, "end": 267.0, "text": " About what?"}, {"start": 267.0, "end": 270.0, "text": " Well, about a wise frog."}, {"start": 270.0, "end": 278.0, "text": " With GPT-3 we get the usual confusion. By round 3 we really see that it was really not meant to do this."}, {"start": 278.0, "end": 283.0, "text": " And with instruct GPT, let's see..."}, {"start": 283.0, "end": 284.0, "text": " Hmm."}, {"start": 284.0, "end": 294.0, "text": " An all-knowing frog who is the master of this guys, a great teacher, and quite possibly the bringer of peace and tranquility to humanity."}, {"start": 294.0, "end": 297.0, "text": " All written by the AI."}, {"start": 297.0, "end": 299.0, "text": " This is fantastic."}, {"start": 299.0, "end": 300.0, "text": " I love it."}, {"start": 300.0, "end": 302.0, "text": " What a time to be alive."}, {"start": 302.0, "end": 308.0, "text": " Now we only looked at three examples here, but what about the rest?"}, {"start": 308.0, "end": 323.0, "text": " Where you're not for a second, open AI scientists ran a detailed user study and found out that people preferred instruct GPT solutions way more often than the previous techniques on a larger set of questions."}, {"start": 323.0, "end": 325.0, "text": " That is a huge difference."}, {"start": 325.0, "end": 327.0, "text": " Absolutely amazing."}, {"start": 327.0, "end": 332.0, "text": " Once again, incredible progress. 
Just one more paper down the line."}, {"start": 332.0, "end": 334.0, "text": " So, is it perfect?"}, {"start": 334.0, "end": 336.0, "text": " No. Of course not."}, {"start": 336.0, "end": 339.0, "text": " Let's highlight one of its limitations."}, {"start": 339.0, "end": 346.0, "text": " If the question contains false premises, it accepts the premise as being real and goes with it."}, {"start": 346.0, "end": 349.0, "text": " This leads to making things up."}, {"start": 349.0, "end": 352.0, "text": " Yes, really. Check this out."}, {"start": 352.0, "end": 354.0, "text": " Why aren't birds real?"}, {"start": 354.0, "end": 357.0, "text": " GPT3 says something."}, {"start": 357.0, "end": 360.0, "text": " I am not sure what this one is about."}, {"start": 360.0, "end": 362.0, "text": " This almost sounds like gibberish."}, {"start": 362.0, "end": 372.0, "text": " While instruct GPT accepts the fact that birds aren't real and even helps us craft an argument for that."}, {"start": 372.0, "end": 376.0, "text": " This is a limitation and I must say a quite remarkable one."}, {"start": 376.0, "end": 379.0, "text": " An AI that makes things up."}, {"start": 379.0, "end": 381.0, "text": " Food for thought."}, {"start": 381.0, "end": 386.0, "text": " And remember at the start of this episode we looked at this moon landing example."}, {"start": 386.0, "end": 389.0, "text": " Did you notice the issue there?"}, {"start": 389.0, "end": 392.0, "text": " Please let me know in the comments below."}, {"start": 392.0, "end": 394.0, "text": " So, what is the issue here?"}, {"start": 394.0, "end": 402.0, "text": " Well, beyond not being all that informative, it was asked to describe the moon landing in a few sentences."}, {"start": 402.0, "end": 404.0, "text": " This is not a few sentences."}, {"start": 404.0, "end": 406.0, "text": " This is one sentence."}, {"start": 406.0, "end": 413.0, "text": " If we give it constraints like that, it tries to adhere to them, but is often not too great at that."}, {"start": 413.0, "end": 420.0, "text": " And of course, both of these shortcomings show us the way for an even better follow-up paper,"}, {"start": 420.0, "end": 429.0, "text": " which based on previous progress in AI research could appear maybe not even in years, but much quicker than that."}, {"start": 429.0, "end": 435.0, "text": " If you are interested in such a follow-up work, make sure to subscribe and hit the bell icon"}, {"start": 435.0, "end": 438.0, "text": " to not miss it when it appears on two-minute papers."}, {"start": 438.0, "end": 441.0, "text": " So, what would you use this for?"}, {"start": 441.0, "end": 443.0, "text": " What do you expect to happen?"}, {"start": 443.0, "end": 445.0, "text": " A couple more papers down the line?"}, {"start": 445.0, "end": 447.0, "text": " Please let me know in the comments below."}, {"start": 447.0, "end": 449.0, "text": " I'd love to hear your thoughts."}, {"start": 449.0, "end": 453.0, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 453.0, "end": 458.0, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 458.0, "end": 467.0, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers"}, {"start": 467.0, "end": 473.0, "text": " because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 473.0, "end": 478.0, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 478.0, "end": 486.0, "text": " 
Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 486.0, "end": 493.0, "text": " Make sure to go to lambdaleps.com, slash papers to sign up for one of their amazing GPU instances today."}, {"start": 493.0, "end": 499.0, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 499.0, "end": 525.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qtOkktTNs-k
Adobe’s New AI: Next Level Cat Videos! 🐈
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "GANgealing GAN-Supervised Dense Visual Alignment" is available here: https://www.wpeebles.com/gangealing Note that this work is a collaboration between Adobe Research, UC Berkeley, CMU and MIT CSAIL. Try it!: - https://colab.research.google.com/drive/1JkUjhTjR8MyLxwarJjqnh836BICfocTu?usp=sharing - https://github.com/wpeebles/gangealing ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/maine-coon-cat-pet-white-cat-5778153/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu The background is an illustration. Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #Adobe
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. As you see here, I love Two Minute Papers. But fortunately, I am not the only one: Elon Musk loves Two Minute Papers too. What's more, even cats love it. Everybody loves Two Minute Papers. So, what was that? Well, these are all synthetic videos that were made by this new magical AI. So, here we are adding synthetic data to an already existing photorealistic video, and the AI is meant to be able to understand the movement, facial changes, and geometry changes, track them correctly, and move the mustache, the tattoos, or anything else properly. That is a huge challenge. How is that even possible? Many AI-based techniques from just a year or two ago were not even close to being able to pull this off. However, now, hold onto your papers and have a look at this new technique. Now that is a great difference. Look at that. So cool. This is going to be really useful for, at the very least, two things. One, augmented reality applications. For instance, have a look at this cat. Well, the bar on the internet for cat videos is quite high and this will not cut it. But wait, yes, now we're talking. The other examples also showcase that the progress in machine learning and AI research these days is absolutely incredible. I would like to send a big thank you to the authors for taking time off their busy day to create these results only for us. That is a huge honor. Thank you so much. So far, this is fantastic news, especially given that many works are limited to one domain. For instance, Microsoft's human generator technique is limited to people. However, this works on people, dogs, cats and even Teslas. So good. I love it. What a time to be alive. But wait, I promised two applications, not one. So what is the other one? Well, image editing. Image editing. Really? What about that? That is not that new. But... Aha! Not just image editing, but mass-scale image editing. Everybody can get antlers, tattoos, stickers, you name it. We just chuck in a dataset of images, choose the change that we wish to make, and out comes the entirety of the edited dataset automatically. Now that is a fantastic value proposition. But of course, the seasoned Fellow Scholar immediately knows that surely not even this technique is perfect. So, where are the weak points? Look... Oh yes. That. I see this issue for nearly every technique that takes on this task. It still has trouble with weird angles and occlusions. But we also have good news. If you can navigate your way around an online Colab notebook, you can try it too. You know what to do. Yes, let the experiments begin. So, what would you use this for? Who would rock the best mustache? And what do you expect to happen a couple more papers down the line? Please let me know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions.
Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
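To make the mass-scale editing idea above concrete, here is a minimal sketch of the propagation step, assuming a hypothetical pre-trained GANgealing-style spatial transformer stn that can warp each image into the shared, aligned space and back; the stn.align and stn.inverse_warp names are illustrative, not the authors' actual API. The edit, say a mustache sticker, is drawn once in the aligned space and then mapped back onto every image in the dataset:

import numpy as np

def propagate_edit(images, stn, edit_rgba):
    """Apply one edit, drawn once in the shared aligned space,
    to every image in a dataset.

    images    : list of HxWx3 float arrays in [0, 1]
    stn       : hypothetical pre-trained spatial transformer;
                stn.align(img) estimates the warp from image space into
                the aligned space, stn.inverse_warp(layer, warp) maps an
                RGBA layer back into the image's own coordinates
    edit_rgba : HxWx4 float array, the sticker/tattoo/mustache layer
    """
    edited = []
    for img in images:
        warp = stn.align(img)                      # dense alignment, learned once
        layer = stn.inverse_warp(edit_rgba, warp)  # edit mapped onto this image
        alpha = layer[..., 3:]                     # per-pixel opacity of the edit
        edited.append(layer[..., :3] * alpha + img * (1.0 - alpha))
    return edited

The expensive part, learning dense alignment across the whole dataset, happens once; each new edit is then just an alpha composite per image, which is what makes the mass-scale part cheap.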
[{"start": 0.0, "end": 4.7, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.7, "end": 9.0, "text": " As you see here, I love Two Minute Papers."}, {"start": 9.0, "end": 16.0, "text": " But fortunately, I am not the only one Elon Musk loves Two Minute Papers too."}, {"start": 16.0, "end": 21.0, "text": " What's more, even cats love it. Everybody loves Two Minute Papers."}, {"start": 21.0, "end": 24.0, "text": " So, what was that?"}, {"start": 24.0, "end": 30.0, "text": " Well, these are all synthetic videos that were made by this new Magical AI."}, {"start": 30.0, "end": 37.0, "text": " So, here we are adding synthetic data to an already existing photorealistic video"}, {"start": 37.0, "end": 41.0, "text": " and the AI is meant to be able to understand the movement,"}, {"start": 41.0, "end": 46.0, "text": " facial changes, geometry changes, and track them correctly,"}, {"start": 46.0, "end": 52.0, "text": " and move the mustache, the tattoos, or anything else properly."}, {"start": 52.0, "end": 57.0, "text": " That is a huge challenge. How is that even possible?"}, {"start": 57.0, "end": 65.0, "text": " Many AI-based techniques from just a year or two ago were not even close to being able to pull this off."}, {"start": 65.0, "end": 70.0, "text": " However, now, hold onto your papers and have a look at this new technique."}, {"start": 70.0, "end": 76.0, "text": " Now that is a great difference. Look at that. So cool."}, {"start": 76.0, "end": 81.0, "text": " This is going to be really useful for, at the very least, two things."}, {"start": 81.0, "end": 87.0, "text": " One, augmented reality applications. For instance, have a look at this cat."}, {"start": 87.0, "end": 94.0, "text": " Well, the bar on the internet for cat videos is quite high and this will not cut it."}, {"start": 94.0, "end": 98.0, "text": " But wait, yes, now we're talking."}, {"start": 98.0, "end": 107.0, "text": " The other examples also showcase that the progress in machine learning and AI research these days is absolutely incredible."}, {"start": 107.0, "end": 115.0, "text": " I would like to send a big thank you to the authors for taking the time off their busy day to create these results only for us."}, {"start": 115.0, "end": 119.0, "text": " That is a huge honor. Thank you so much."}, {"start": 119.0, "end": 127.0, "text": " So far, this is fantastic news, especially given that many works are limited to one domain."}, {"start": 127.0, "end": 133.0, "text": " For instance, Microsoft's human generator technique is limited to people."}, {"start": 133.0, "end": 141.0, "text": " However, this works on people, dogs, cats and even Teslas."}, {"start": 141.0, "end": 146.0, "text": " So good. I love it. What a time to be alive."}, {"start": 146.0, "end": 153.0, "text": " But wait, I promised two applications. Not one. So what is the other one?"}, {"start": 153.0, "end": 156.0, "text": " Well, image editing."}, {"start": 156.0, "end": 163.0, "text": " Image editing. Really? What about that? That is not that new. But..."}, {"start": 163.0, "end": 169.0, "text": " Aha! 
Not just image editing, but mass scale image editing."}, {"start": 169.0, "end": 174.0, "text": " Everybody can get antlers, tattoos, stickers, you name it."}, {"start": 174.0, "end": 184.0, "text": " We just chuck in the dataset of images, choose the change that we wish to make and outcomes the entirety of the edited dataset automatically."}, {"start": 184.0, "end": 195.0, "text": " Now that is a fantastic value proposition. But of course, the season follow-scaler immediately knows that surely not even this technique is perfect."}, {"start": 195.0, "end": 198.0, "text": " So, where are the weak points?"}, {"start": 198.0, "end": 199.0, "text": " Look..."}, {"start": 199.0, "end": 211.0, "text": " Oh yes. That. I see this issue for nearly every technique that takes on this task. It still has troubles with weird angles and occlusions."}, {"start": 211.0, "end": 219.0, "text": " But we also have good news. If you can navigate your way around an online collab notebook, you can try it too."}, {"start": 219.0, "end": 221.0, "text": " You know what to do?"}, {"start": 221.0, "end": 224.0, "text": " Yes, let the experiments begin."}, {"start": 224.0, "end": 229.0, "text": " So, what would you use this for? Who would rock the best mustache?"}, {"start": 229.0, "end": 235.0, "text": " And what do you expect to happen? A couple more papers down the line. Please let me know in the comments below."}, {"start": 235.0, "end": 252.0, "text": " This episode has been supported by Kohir AI. Kohir builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code."}, {"start": 252.0, "end": 266.0, "text": " You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts to create your own custom models to understand text or even generated."}, {"start": 266.0, "end": 276.0, "text": " For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping."}, {"start": 276.0, "end": 291.0, "text": " Or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to Kohir.ai slash papers or click the link in the video description and give it a try today."}, {"start": 291.0, "end": 306.0, "text": " It's super easy to use. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8AZhcnWOK7M
Waymo's AI Recreates San Francisco From 2.8 Million Photos! 🚘
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (Thank you Soumik Rakshit!): http://wandb.me/2min-block-nerf 📝 The paper "Block-NeRF Scalable Large Scene Neural View Synthesis" from #Waymo is available here: https://waymo.com/research/block-nerf/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #BlockNeRF
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see if Waymo's AI can recreate a virtual copy of San Francisco from 2.8 million photos. To be able to do that, they rely on a previous AI-based technique that can take a collection of photos like these and magically create a video where we can fly through these photos. This is what we call a NeRF-based technique and these are truly amazing. Essentially, photos go in, the AI fills in the gaps and reality comes out. And there are a lot of gaps between these photos, all of which are filled in with high quality synthetic data. So, as you see with these previous methods, great leaps are being made, but one thing stayed more or less the same. And that is, the scale of these scenes is not that big. So, scientists at Waymo had a crazy idea and they said rendering just a tiny scene is not that useful. We have millions of photos lying around, why not render an entire city like this? So, can Waymo do that? Well, maybe, but that would take Waymo. I am sorry, I am so sorry, I just couldn't resist. Well, let's see what they came up with. Look, these self-driving cars are going around the city, they take photos along their journey and... Well, I have to say that I am a little skeptical here. Have a look at what previous techniques could do with this dataset. This is not really usable. So, could Waymo pull this off? Well, hold on to your papers and let's have a look together. My goodness, this is their fully reconstructed 3D neighborhood from these photos. Wow, that is superb. And don't forget, most of this information is synthetic, that is, filled in by the AI. Does this mean that? Yes, yes it does. It means three amazing things. One, we can drive a different path that has not been driven before by these cars and still see the city correctly. Two, we can look at these buildings from viewpoints that we don't have enough information about and the AI fills in the rest of the details. So cool. But it doesn't end there. No sir, not even close. Three, here comes my favorite. We can also engage in what they call appearance modulation. Yes, some of the driving took place at night, some during the daytime, so we have information about the change of the lighting conditions. What does that mean? It means that we can even fuse all this information together and choose the time of day for our virtual city. That is absolutely amazing. I love it. Yes, of course, not even this technique is perfect. The resolution and the details should definitely be improved over time. Plus, it does well with a stationary city, but with dynamic moving objects, not so much. But do not forget, the original NeRF paper was published just two years ago and it could do this. And now just a couple papers down the line and we have not only these tiny scenes, but entire city blocks. So much improvement in just a couple papers. How cool is that? Absolutely amazing. And with this, we can drive and play around in a beautiful virtual world that is a copy of the real world around us. And now, if we wish it to be a little different, we even have the freedom to change this world according to our artistic vision. I would love to see more work in this direction. But wait, here comes the big question. What is all this good for? Well, one of the answers is sim-to-real. What is that? Sim-to-real means training an AI in a simulated world and trying to teach it everything it needs to learn there before deploying it into the real world.
Here is an amazing example. Look, OpenAI trained a robot hand in a simulation to be able to rotate these Rubik's cubes. And then deployed the software onto a real robot hand and look, it can use this simulation knowledge and now it works in the real world too. But sim-to-real has relevance to self-driving cars too. Look, Tesla is already working on creating virtual worlds and training their cars there. One of the advantages of that is that we can create really unlikely and potentially unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely. And when we deploy them into the real world, they will have all this knowledge. It is fantastic to see Waymo also moving in this direction. What a time to be alive. So, what would you use this for? What do you expect to happen? A couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. What you see here is a report of this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
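To give a feel for how an entire city can be rendered from many small NeRFs, here is a minimal sketch of Block-NeRF-style compositing, with made-up names: each block is a separate NeRF with its own center, only the blocks near the camera are rendered, their images are blended with inverse-distance weights, and a latent appearance code selects the lighting conditions, for example day versus night. The model.render call and the 50-meter visibility radius are illustrative assumptions, not the paper's exact interface:

import numpy as np

def render_city(cam_pos, view, blocks, appearance_code, radius=50.0):
    """Blend the outputs of the individual block NeRFs near the camera.

    cam_pos         : (3,) camera position in city coordinates
    view            : camera orientation, passed through to each block
    blocks          : list of (center, model) pairs; model.render is a
                      hypothetical call returning an HxWx3 image
    appearance_code : latent vector selecting the lighting, e.g. day or night
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    nearby = [(np.linalg.norm(cam_pos - np.asarray(c)), m) for c, m in blocks]
    nearby = [(d, m) for d, m in nearby if d < radius]   # cull far-away blocks
    assert nearby, "no trained block covers this camera position"
    weights = np.array([1.0 / max(d, 1e-6) for d, _ in nearby])
    weights /= weights.sum()                             # inverse-distance blend
    images = [m.render(cam_pos, view, appearance_code) for _, m in nearby]
    return sum(w * img for w, img in zip(weights, images))

Blending nearby blocks rather than picking the single closest one is what keeps the composite seamless as the camera drives across block boundaries.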
[{"start": 0.0, "end": 4.8, "text": " In dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jolai Fahir."}, {"start": 4.8, "end": 15.0, "text": " Today we are going to see if Waymo's AI can recreate a virtual copy of San Francisco from 2.8 million photos."}, {"start": 15.0, "end": 22.7, "text": " To be able to do that, they rely on a previous AIB technique that can take a collection of photos like these"}, {"start": 22.7, "end": 28.400000000000002, "text": " and magically create a video where we can fly through these photos."}, {"start": 28.4, "end": 33.9, "text": " This is what we call a nerve-based technique and these are truly amazing."}, {"start": 33.9, "end": 41.9, "text": " Essentially, photos go in, the AI fears in the gaps and reality comes out."}, {"start": 41.9, "end": 49.9, "text": " And there are a lot of gaps between these photos, all of which are filled in with high quality synthetic data."}, {"start": 49.9, "end": 58.9, "text": " So, as you see with these previous methods, great leaps are being made, but one thing stayed more or less the same."}, {"start": 58.9, "end": 63.4, "text": " And that is, the scale of these scenes is not that big."}, {"start": 63.4, "end": 72.4, "text": " So, scientists at Waymo had a crazy idea and they said rendering just a tiny scene is not that useful."}, {"start": 72.4, "end": 79.9, "text": " We have millions of photos laying around, why not render an entire city like this?"}, {"start": 79.9, "end": 86.4, "text": " So, can Waymo do that? Well, maybe, but that would take Waymo."}, {"start": 86.4, "end": 90.4, "text": " I am sorry, I am so sorry, I just couldn't resist."}, {"start": 90.4, "end": 93.4, "text": " Well, let's see what they came up with."}, {"start": 93.4, "end": 100.9, "text": " Look, these self-driving cars are going around the city, they take photos along their journey and..."}, {"start": 100.9, "end": 105.4, "text": " Well, I have to say that I am a little skeptical here."}, {"start": 105.4, "end": 111.9, "text": " Have a look at what previous techniques could do with this dataset. This is not really usable."}, {"start": 111.9, "end": 119.9, "text": " So, could Waymo pull this off? Well, hold on to your papers and let's have a look together."}, {"start": 119.9, "end": 127.4, "text": " My goodness, this is their fully reconstructed 3D neighborhood from these photos."}, {"start": 127.4, "end": 138.4, "text": " Wow, that is superb. And don't forget, most of this information is synthetic, that is, field in by the AI."}, {"start": 138.4, "end": 145.4, "text": " Does this mean that? Yes, yes it does. It means three amazing things."}, {"start": 145.4, "end": 154.4, "text": " One, we can drive a different path that has not been driven before by these cars and still see the city correctly."}, {"start": 154.4, "end": 165.4, "text": " Two, we can look at these buildings from viewpoints that we don't have enough information about and the AI feels in the rest of the details."}, {"start": 165.4, "end": 171.4, "text": " So cool. But it doesn't end there. No sir, not even close."}, {"start": 171.4, "end": 179.4, "text": " Three, here comes my favorite. We can also engage in what they call appearance modulation."}, {"start": 179.4, "end": 190.4, "text": " Yes, some of the driving took place at night, some during the daytime, so we have information about the change of the lighting conditions."}, {"start": 190.4, "end": 205.4, "text": " What does that mean? 
It means that we can even fuse all this information together and choose the time of day for our virtual city. That is absolutely amazing. I love it."}, {"start": 205.4, "end": 221.4, "text": " Yes, of course, not even this technique is perfect. The resolution and the details should definitely be improved over time. Plus, it does well with a stationary city, but with dynamic moving objects, not so much."}, {"start": 221.4, "end": 230.4, "text": " But do not forget, the original first nerve paper was published just two years ago and it could do this."}, {"start": 230.4, "end": 246.4, "text": " And now just a couple papers down the line and we have not only these tiny scenes, but entire city blocks. So much improvement in just a couple papers. How cool is that? Absolutely amazing."}, {"start": 246.4, "end": 263.4, "text": " And with this, we can drive and play around in a beautiful virtual world that is a copy of the real world around us. And now, if we wish it to be a little different, we can even have our freedom in changing this world according to our artistic vision."}, {"start": 263.4, "end": 274.4, "text": " I would love to see more work in this direction. But wait, here comes the big question. What is all this good for? Well, one of the answers is seem to real."}, {"start": 274.4, "end": 287.4, "text": " What is that? Seem to real means training an AI in a simulated world and trying to teach it everything it needs to learn there before deploying it into the real world."}, {"start": 287.4, "end": 297.4, "text": " Here is an amazing example. Look, open AI trained the robot hand in a simulation to be able to rotate these ruby cubes."}, {"start": 297.4, "end": 309.4, "text": " And then deployed the software onto a real robot hand and look, it can use this simulation knowledge and now it works in the real world too."}, {"start": 309.4, "end": 320.4, "text": " But seem to real has relevance to self-driving cars too. Look, Tesla is already working on creating virtual worlds and training their cars there."}, {"start": 320.4, "end": 334.4, "text": " One of the advantages of that is that we can create really unlikely and potentially unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely."}, {"start": 334.4, "end": 344.4, "text": " And when we deploy them into the real world, they will have all this knowledge. It is fantastic to see Waymo also moving in this direction."}, {"start": 344.4, "end": 356.4, "text": " What a time to be alive. So, what would you use this for? What do you expect to happen? A couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts."}, {"start": 356.4, "end": 369.4, "text": " What you see here is a report of this exact paper we have talked about which was made by Wades and Biasis. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better."}, {"start": 369.4, "end": 382.4, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better."}, {"start": 382.4, "end": 396.4, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more. 
And the best part is that Wades and Biasis is free for all individuals, academics and open source projects."}, {"start": 396.4, "end": 406.4, "text": " Make sure to visit them through www.nb.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 406.4, "end": 426.4, "text": " Our thanks to Wades and Biasis for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cwS_Fw4u0rM
This AI Creates Beautiful Light Simulations! 🔆
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Neural Radiosity" is available here: http://www.cs.umd.edu/~saeedhd/#portfolio/neural_radiosity 🔆 The free light transport course is available here. You'll love it! https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 📝 The neural rendering paper "Gaussian Material Synthesis" is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/cup-coffee-tea-kitchenware-drink-3199384/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu The thumbnail is used as illustration. Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Chapters: 00:00 Intro 00:12 Chapter 1 - Radiosity 01:55 Chapter 2 - Neural Rendering 03:26 Chapter 3 - Neural Radiosity? Can that really be? Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at this insane light transport simulation paper and get our minds blown in three chapters. Chapter 1. Radiosity. Radiosity is an old, old light transport algorithm that can simulate the flow of light within a scene. And it can generate a scene of this quality. Well, not the best clearly, but this technique is from decades ago. Back in the day, this was a simple and intuitive attempt at creating a light simulation program and quite frankly, this was the best we could do. It goes something like this. Slice up the scene into tiny surfaces and compute the light transport between these tiny surfaces. This worked reasonably well for diffuse light transport, matte objects if you will. But it was not great at rendering shiny objects. And it gets even worse. Look, you see these blocky artifacts here? These come from the fact that the scene has been subdivided into these surfaces and the surfaces are not fine enough for these boundaries to disappear. But, yes, I hear you asking, Károly, why talk about this ancient technique? I'll tell you in a moment, you'll see, I promise. So, yes, radiosity is old. Some professors still teach it to their students. It makes an interesting history lesson, but I haven't seen any use of this technique in the industry for decades now. If radiosity were a vehicle, it would be a horse carriage in the age of high-tech Tesla cars. So, I know what you're thinking. Yes, let's have a look at those Teslas. Chapter 2. Neural rendering. Many modern light simulation programs can now simulate proper light transport with shiny objects and none of these blocky artifacts; they are, as you see, absolutely amazing. They can render all kinds of material models, detailed geometry, caustics, color bleeding, you name it. However, they start out from a noisy image, and as we compute the path of more and more light rays, this noisy image clears up over time. But, this still takes a while. How long? Well, from minutes to days. Ouch. And then, neural rendering entered the frame. Here you see our earlier paper that replaced the whole light simulation program with a neural network that learned how to do this. And it can create these images so quickly that it easily runs not in minutes or days, but as fast as you see here. Yes, real time on a commodity graphics card. Now note that this neural renderer is limited to this particular scene. With this, I hope that it is easy to see that we are now so far beyond radiosity that it sounds like a distant memory of the olden times. So, once again, why talk about radiosity? Well, check this out. Chapter 3. Neural radiosity. Excuse me, what? Yes, you heard it right. This paper is about neural radiosity. This work elevates the old, old radiosity algorithm by using a similar formulation to the original technique, but also infusing it with a powerful neural network. It is using the same horse carriage, but strapping it onto a rocket, if you will. Now you have my attention. So, let's see what this can do together. Look. Yes, it can render really intense, specular highlights. Hold on to your papers and… Wow. Once again, the results look like a nearly pixel-perfect copy of the reference simulation. And we now understand the limitations of the old radiosity, so let's strike where it hurts the most. Yes, perfectly specular, mirror-like surfaces. Let's see what happens here. Well, I can hardly believe what I am seeing here. No issues whatsoever.
Still close to pixel-perfect. This new paper truly elevates the good old radiosity to the next level. So good. Loving it. But wait a second. I hear you asking, yes, Károly, this is all well and good. But if we have the reference simulation, why not just use that? Good question. Well, that's right. The reference is great, but that one takes up to several hours to compute, and the new technique can be done super quickly. Yet they look almost exactly the same. My goodness. In fact, let's look at an equal time comparison against one of the Tesla techniques, path tracing. We give the two techniques the same amount of time and see what they can produce. Let's see. Now that is no contest. Look, this is Eric Veach's legendary scene where the light only comes from the neighboring room through a door that is only slightly ajar. This is notoriously difficult for any kind of light transport algorithm. And yet, look at how good this new one is at tackling it. Now, note that not even this technique is perfect. It has two caveats. One, training takes place per scene. This means that we need to give the scenes to the neural network in advance, so it can learn how light bounces around this place. And this can take from minutes to hours. But, for instance, if we have a video game with a limited set of places that you can go, we can train our neural network on all of them in advance and deploy it to the players who can then enjoy it for as long as they wish. No more training is required after that. But the technique would still need to be a little faster than it currently is. And two, we also need quite a bit of memory to perform all this. And yes, I think this is an excellent place to invoke the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. And two more papers down the line, who knows, maybe we get this in real time and with a much friendlier memory consumption. Now, there is one more area that I think would be an excellent direction for future work. And that is about the per-scene training of the neural network. In this work, it has to get a feel of the scene before the light simulation happens. So how does the knowledge learned on one scene transfer to others? I imagine that it should be possible to create a more general version of this that does not need to look at a new scene before the simulation takes place. And in summary, I absolutely love this paper. It takes an old, old algorithm, blows the dust off of it and completely reinvigorates it by infusing it with a modern learning-based technique. What a time to be alive! So what do you think? What would you use this for? I'd love to hear your thoughts. Please let me know in the comments below. And when watching all these beautiful results, if you feel that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but the teachings should be available for everyone. Free education for everyone, that's what I want. So the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more.
If you watch it, you will see the world differently. Perceptilabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
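For reference, the classic radiosity algorithm described in Chapter 1 boils down to solving a linear system over the scene's patches, B = E + rho * (F B), where B holds the radiosity of each patch, E the emission, rho the diffuse reflectance, and F the form factors between patches. A minimal sketch of the standard iterative solve, with a toy three-patch scene whose numbers are made up purely for illustration:

import numpy as np

def solve_radiosity(E, rho, F, iters=100):
    """Classic radiosity via Jacobi-style iteration: B = E + rho * (F @ B).

    E    : (n,) emitted radiosity per patch
    rho  : (n,) diffuse reflectance per patch
    F    : (n, n) form factors (fraction of light leaving patch j
           that arrives at patch i); rows of a valid F sum to <= 1
    """
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)   # gather one more bounce of light
    return B

# Toy scene: patch 0 is an emitter, patches 1 and 2 only reflect.
E   = np.array([1.0, 0.0, 0.0])
rho = np.array([0.0, 0.8, 0.5])
F   = np.array([[0.0, 0.2, 0.2],
                [0.2, 0.0, 0.3],
                [0.2, 0.3, 0.0]])
print(solve_radiosity(E, rho, F))

The neural version in the paper keeps this same equation but replaces the per-patch vector B with a network that maps any surface point to its outgoing radiance and is trained by minimizing the residual of the equation, which is why the blocky per-patch artifacts of the classic method disappear.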
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 10.24, "text": " Today we are going to have a look at this insane light transport simulation paper"}, {"start": 10.24, "end": 15.68, "text": " and get our minds blown in three chapters. Chapter 1. Radiosity."}, {"start": 16.240000000000002, "end": 23.36, "text": " Radiosity is an old, old light transport algorithm that can simulate the flow of light within a scene."}, {"start": 23.68, "end": 26.96, "text": " And it can generate a scene of this quality."}, {"start": 26.96, "end": 32.480000000000004, "text": " Well, not the best clearly, but this technique is from decades ago."}, {"start": 32.480000000000004, "end": 39.28, "text": " Back in the day, this was a simple and intuitive attempt at creating a light simulation program"}, {"start": 39.28, "end": 44.0, "text": " and quite frankly, this was the best we could do. It goes something like this."}, {"start": 44.0, "end": 51.92, "text": " Slice up the scene into tiny surfaces and compute the light transport between these tiny surfaces."}, {"start": 51.92, "end": 57.6, "text": " This worked reasonably well for diffuse light transport, matte objects if you will."}, {"start": 57.6, "end": 64.08, "text": " But it was not great at rendering shiny objects. And it gets even worse."}, {"start": 64.08, "end": 70.48, "text": " Look, you see these blocky artifacts here? These come from the fact that the scene has been"}, {"start": 70.48, "end": 77.76, "text": " subdivided into these surfaces and the surfaces are not fine enough for these boundaries to disappear."}, {"start": 77.76, "end": 84.24000000000001, "text": " But, yes, I hear you asking, Karoy, why talk about this ancient technique?"}, {"start": 84.24000000000001, "end": 87.44, "text": " I'll tell you in a moment, you'll see, I promise."}, {"start": 87.44, "end": 93.92, "text": " So, yes, radiosity is old. Some professors still teach it to their students."}, {"start": 93.92, "end": 101.60000000000001, "text": " It makes an interesting history lesson, but I haven't seen any use of this technique in the industry for decades now."}, {"start": 101.6, "end": 109.03999999999999, "text": " If radiosity would be a vehicle, it would be a horse carriage in the age of high-tech Tesla cars."}, {"start": 109.83999999999999, "end": 114.39999999999999, "text": " So, I know what you're thinking. Yes, let's have a look at those Teslas."}, {"start": 115.03999999999999, "end": 121.6, "text": " Chapter 2. Neural rendering. Many modern light simulation programs can now simulate"}, {"start": 121.6, "end": 128.24, "text": " proper light transport which shiny objects, none of these blocky artifacts, they are, as you see,"}, {"start": 128.24, "end": 134.4, "text": " absolutely amazing. They can render all kinds of material models, detailed geometry,"}, {"start": 134.4, "end": 141.36, "text": " caustics, color bleeding, you name it. However, they seem to start out from a noisy image"}, {"start": 141.36, "end": 148.4, "text": " and as we compute the path of more and more light rays, this noisy image clears up over time."}, {"start": 149.20000000000002, "end": 155.68, "text": " But, this still takes a while. How long? Well, from minutes to days."}, {"start": 155.68, "end": 160.96, "text": " Ouch. And then, a neural rendering entered the"}, {"start": 160.96, "end": 167.36, "text": " frame. 
Here you see our earlier paper that replaced the whole light simulation program with a neural"}, {"start": 167.36, "end": 174.48000000000002, "text": " network that learned how to do this. And it can create these images so quickly that it easily runs"}, {"start": 174.48000000000002, "end": 183.76000000000002, "text": " not in minutes or days, but as fast as you see here. Yes, real time on a commodity graphics card."}, {"start": 183.76, "end": 190.16, "text": " Now note that this neural render is limited to this particular scene. With this, I hope that it"}, {"start": 190.16, "end": 197.12, "text": " is easy to see that we are now so far beyond radiosity that it sounds like a distant memory of"}, {"start": 197.12, "end": 207.76, "text": " the olden times. So, once again, why talk about radiosity? Well, check this out. Chapter 3. Neural"}, {"start": 207.76, "end": 216.48, "text": " Radiosity Excuse me, what? Yes, you heard it right. This paper is about neural radiosity."}, {"start": 216.48, "end": 223.6, "text": " This work elevates the old, old radiosity algorithm by using a similar formulation to the original"}, {"start": 223.6, "end": 230.79999999999998, "text": " technique, but also infusing it with a powerful neural network. It is using the same"}, {"start": 230.8, "end": 238.24, "text": " horse carriage, but strapping it onto a rocket, if you will. Now you have my attention. So,"}, {"start": 238.24, "end": 246.16000000000003, "text": " let's see what this can do together. Look. Yes, it can render really intense,"}, {"start": 246.16000000000003, "end": 254.64000000000001, "text": " specular highlights. Hold on to your papers and\u2026 Wow. Once again, the results look like a"}, {"start": 254.64, "end": 262.0, "text": " nearly pixel-perfect copy of the reference simulation. And we now understand the limitations"}, {"start": 262.0, "end": 269.59999999999997, "text": " of the old radiosity, so let's strike where it hurts the most. Yes, perfectly specular,"}, {"start": 269.59999999999997, "end": 278.0, "text": " mirror-like surfaces. Let's see what happens here. Well, I can hardly believe what I am seeing here."}, {"start": 278.0, "end": 286.56, "text": " No issues whatsoever. Still close to pixel-perfect. This new paper truly elevates the good"}, {"start": 286.56, "end": 295.12, "text": " old radiosity to the next level. So good. Loving it. But wait a second. I hear you asking,"}, {"start": 295.12, "end": 302.56, "text": " yes, Karoi, this is all well and good. But if we have the reference simulation, why not just use that?"}, {"start": 302.56, "end": 310.0, "text": " Good question. Well, that's right. The reference is great, but that one takes up to several hours"}, {"start": 310.0, "end": 317.84000000000003, "text": " to compute, and the new technique can be done super quickly. Yet they look almost exactly the same."}, {"start": 318.48, "end": 324.48, "text": " My goodness. In fact, let's look at an equal time comparison against one of the Tesla"}, {"start": 324.48, "end": 332.08000000000004, "text": " techniques past tracing. We give the two techniques the same amount of time and see what they can produce."}, {"start": 332.96000000000004, "end": 341.92, "text": " Let's see. Now that is no contest. Look, this is Eric Vitch's legendary scene where the light"}, {"start": 341.92, "end": 348.48, "text": " only comes from the neighboring room through a door that is only slightly a jar. 
This is notoriously"}, {"start": 348.48, "end": 355.12, "text": " difficult for any kind of light transport algorithm. And yet, look at how good this new one is"}, {"start": 355.12, "end": 363.04, "text": " entangling it. Now, note that not even this technique is perfect. It has two caveats. One training"}, {"start": 363.04, "end": 369.76, "text": " takes place per scene. This means that we need to give the scenes to the neural network in advance."}, {"start": 369.76, "end": 376.72, "text": " So it can learn how light bounces off of this place. And this can take from minutes to hours."}, {"start": 376.72, "end": 383.36, "text": " But, for instance, if we have a video game with a limited set of places that you can go,"}, {"start": 383.36, "end": 390.16, "text": " we can train our neural network on all of them in advance and deploy it to the players who can"}, {"start": 390.16, "end": 397.84000000000003, "text": " then enjoy it for as long as they wish. No more training is required after that. But the technique"}, {"start": 397.84000000000003, "end": 404.72, "text": " would still need to be a little faster than it currently is. And two, we also need quite a bit"}, {"start": 404.72, "end": 412.40000000000003, "text": " of memory to perform all this. And yes, I think this is an excellent place to invoke the first"}, {"start": 412.40000000000003, "end": 419.20000000000005, "text": " law of papers which says that research is a process. Do not look at where we are, look at where we"}, {"start": 419.20000000000005, "end": 426.16, "text": " will be two more papers down the line. And two more papers down the line, who knows, maybe we get"}, {"start": 426.16, "end": 433.36, "text": " this in real time and with a much friendlier memory consumption. Now, there is one more area that"}, {"start": 433.36, "end": 439.2, "text": " I think would be an excellent direction for future work. And that is about the per scene training"}, {"start": 439.2, "end": 445.44, "text": " of the neural network. In this work, it has to get a feel of the scene before the light simulation"}, {"start": 445.44, "end": 452.96000000000004, "text": " happens. So how does the knowledge learn on one scene transfer to others? I imagine that it should"}, {"start": 452.96000000000004, "end": 458.72, "text": " be possible to create a more general version of this that does not need to look at a new scene"}, {"start": 458.72, "end": 466.96000000000004, "text": " before the simulation takes place. And in summary, I absolutely love this paper. It takes an old"}, {"start": 466.96000000000004, "end": 474.16, "text": " old algorithm, blows the dust off of it and completely reinvigorates it by infusing it with a"}, {"start": 474.16, "end": 481.44000000000005, "text": " modern learning based technique. What a time to be alive! So what do you think? What would you use"}, {"start": 481.44000000000005, "end": 487.68, "text": " this for? I'd love to hear your thoughts. Please let me know in the comments below. And when watching"}, {"start": 487.68, "end": 493.44, "text": " all these beautiful results, if you feel that this light transport thing is pretty cool and you"}, {"start": 493.44, "end": 500.0, "text": " would like to learn more about it, I had a master-level course on this topic at the Technical University"}, {"start": 500.0, "end": 506.4, "text": " of Vienna. 
Since I was always teaching it to a handful of motivated students, I thought that the"}, {"start": 506.4, "end": 512.8, "text": " teachings shouldn't only be available for the privileged few who can afford a college education,"}, {"start": 512.8, "end": 519.28, "text": " but the teachings should be available for everyone. Free education for everyone, that's what I"}, {"start": 519.28, "end": 526.3199999999999, "text": " want. So the course is available free of charge for everyone, no strings attached, so make sure"}, {"start": 526.3199999999999, "end": 531.52, "text": " to click the link in the video description to get started. We write a full-light simulation"}, {"start": 531.52, "end": 538.7199999999999, "text": " program from scratch there and learn about physics, the world around us, and more. If you watch it,"}, {"start": 538.72, "end": 545.44, "text": " you will see the world differently. Percepti Labs is a visual API for TensorFlow carefully designed"}, {"start": 545.44, "end": 552.0, "text": " to make machine learning as intuitive as possible. This gives you a faster way to build out models"}, {"start": 552.0, "end": 558.96, "text": " with more transparency into how your model is architected, how it performs, and how to debug it."}, {"start": 558.96, "end": 565.28, "text": " And it even generates visualizations for all the model variables and gives you recommendations"}, {"start": 565.28, "end": 572.48, "text": " both during modeling and training and thus all this automatically. I only wish I had a tool like"}, {"start": 572.48, "end": 578.8, "text": " this when I was working on my neural networks during my PhD years. Visit perceptilabs.com,"}, {"start": 578.8, "end": 585.8399999999999, "text": " slash papers, and start using their system for free today. Thanks to perceptilabs for their support"}, {"start": 585.8399999999999, "end": 590.72, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 590.72, "end": 600.72, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=e0yEOw6Zews
NVIDIA's New AI: Enhance! 🔍
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "EG3D: Efficient Geometry-aware 3D Generative Adversarial Networks" is available here: https://matthew-a-chan.github.io/EG3D/ 📝 The latent space material synthesis paper "Gaussian Material Synthesis" is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/discus-fish-fish-aquarium-fauna-1943755/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu The thumbnail is used as illustration. Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how Nvidia's new AI can do three seemingly impossible things with just one elegant technique. For reference, here is a previous method that can take a collection of photos like these and magically create a video where we can fly through these photos. This is what we call a NeRF-based technique and these are truly amazing. Essentially, photos go in and reality comes out. So, I know what you're thinking: Károly, this looks like science fiction. Can even this be topped? And the answer is yes, yes it can. Now you see, this new technique can also look at a small collection of photos and, be it people or cats, it learns to create a continuous video of them. These look fantastic, and remember, most of the information that you see here is synthetic, which means it is created by the AI. So good, but wait, hold on to your papers because there is a twist. It is often the case that some techniques think in terms of photos, while other techniques think in terms of volumes. And get this, this is a hybrid technique that thinks in terms of both. Okay, so what does that mean? It means this. Yes, it also learned to not only generate these photos, but also the 3D geometry of these models at the same time. And the quality of these results is truly something else. Look at how previous techniques struggle with the same task. Wow, they are completely different from the input model. And you might think that of course they are not so good, they are probably very old methods. Well, not quite. Look, these are not some ancient techniques. For instance, GIRAFFE is from the end of 2020 or the end of 2021, depending on which variant they used. And now let's see what the new method does on the same data. Wow, my goodness, now that is something. Such improvement in so little time. The pace of progress in AI research is nothing short of amazing. And not only that, but everything it produces is multi-view consistent. This means that we don't see a significant amount of flickering as we rotate these models. There is a tiny bit on the fur of the cats, but other than that, very little. That is a super important usability feature. But wait, it does even more. Two, it can also perform one of our favorites, super resolution. What is super resolution? Simple: a coarse, pixelated image goes in, and what comes out? Of course, a beautiful, detailed image. How cool is that? And here comes number three. It projects these images into a latent space. What does that mean? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. Now, let's see. Yes, when we take a walk in the internal latent space of this technique, we can pick a starting point, a human face, and generate these animations as this face morphs into other possible human faces. In short, it can generate a variety of different people. Very cool. I love it. Now, of course, not even this technique is perfect. I see some flickering around the teeth, but otherwise, this will be a fantastic tool for creating virtual people. And remember, not only photos of virtual people, we get the 3D geometry for their heads too.
With this, we are one step closer to democratizing the creation of virtual humans in our virtual worlds. What a time to be alive. And if you have been holding onto your paper so far, now squeeze that paper because, get this, you can do all of this in real time. And all of these applications can be done with just one elegant AI technique. Once again, scientists at Nvidia knocked it out of the park with this one. Bravo. So, what about you? If all this can be done in real time, what would you use this for? I'd love to know. Please let me know in the comments below. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
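The latent-space walk that produces the face-morphing animations can be sketched in a few lines, assuming a hypothetical pre-trained geometry-aware generator G(z, pose) that returns an image; spherical interpolation is a common choice for Gaussian latent codes, though the paper's exact procedure may differ:

import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.
    Assumes z0 and z1 are not (anti)parallel, which holds for random draws."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_walk(G, pose, steps=60, dim=512, seed=0):
    """Yield frames morphing one synthetic face into another.

    G : hypothetical generator, G(z, pose) -> HxWx3 image; for a
        geometry-aware model, the same z also yields consistent 3D geometry.
    """
    rng = np.random.default_rng(seed)
    z0, z1 = rng.standard_normal(dim), rng.standard_normal(dim)
    for t in np.linspace(0.0, 1.0, steps):
        yield G(slerp(z0, z1, t), pose)

Because the latent space organizes similar faces near each other, every intermediate point along the interpolated path decodes to a plausible face rather than a blend artifact.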
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.76, "end": 15.4, "text": " Today we are going to see how Nvidia's new AI can do three seemingly impossible things with just one elegant technique."}, {"start": 15.4, "end": 27.12, "text": " For reference, here is a previous method that can take a collection of photos like these and magically create a video where we can fly through these photos."}, {"start": 27.12, "end": 32.72, "text": " This is what we call a nerve-based technique and these are truly amazing."}, {"start": 32.72, "end": 37.72, "text": " Essentially, photos go in and reality comes out."}, {"start": 37.72, "end": 42.92, "text": " So, I know you're thinking Karo, this looks like science fiction."}, {"start": 42.92, "end": 49.120000000000005, "text": " Can even this be topped and the answer is yes, yes it can."}, {"start": 49.12, "end": 61.72, "text": " Now you see, this new technique can also look at a small collection of photos and be it people or cats, it learns to create a continuous video of them."}, {"start": 61.72, "end": 73.12, "text": " This is look fantastic and remember, most of the information that you see here is synthetic, which means it is created by the AI."}, {"start": 73.12, "end": 79.92, "text": " So good, but wait, hold on to your papers because there is a twist."}, {"start": 79.92, "end": 89.12, "text": " It is often the case for some techniques that they think in terms of photos, while other techniques think in terms of volumes."}, {"start": 89.12, "end": 95.52000000000001, "text": " And get this, this is a hybrid technique that thinks in terms of both."}, {"start": 95.52000000000001, "end": 100.92, "text": " Okay, so what does that mean? It means this."}, {"start": 100.92, "end": 110.52, "text": " Yes, it also learned to not only generate these photos, but also the 3D geometry of these models at the same time."}, {"start": 110.52, "end": 114.92, "text": " And this quality for the results is truly something else."}, {"start": 114.92, "end": 119.32000000000001, "text": " Look at how previous techniques struggle with the same task."}, {"start": 119.32000000000001, "end": 123.92, "text": " Wow, they are completely different than the input model."}, {"start": 123.92, "end": 130.52, "text": " And you might think that of course they are not so good, they are probably very old methods."}, {"start": 130.52, "end": 135.52, "text": " Well, not quite. Look, these are not some ancient techniques."}, {"start": 135.52, "end": 144.52, "text": " For instance, Giraffe is from the end of 2020 to the end of 2021, depending on which variant they used."}, {"start": 144.52, "end": 149.52, "text": " And now let's see what the new method does on the same data."}, {"start": 149.52, "end": 157.52, "text": " Wow, my goodness, now that is something. 
Such improvement is so little time."}, {"start": 157.52, "end": 161.92000000000002, "text": " The pace of progress in AR research is nothing short of amazing."}, {"start": 161.92000000000002, "end": 168.32000000000002, "text": " And not only that, but everything it produces is multi-view consistent."}, {"start": 168.32000000000002, "end": 174.52, "text": " This means that we don't see a significant amount of flickering as we rotate these models."}, {"start": 174.52, "end": 180.92000000000002, "text": " There is a tiny bit on the fur of the cats, but other than that, very little."}, {"start": 180.92000000000002, "end": 184.72, "text": " That is a super important usability feature."}, {"start": 184.72, "end": 187.52, "text": " But wait, it does even more."}, {"start": 187.52, "end": 193.12, "text": " Two, it can also perform one of our favorites, super resolution."}, {"start": 193.12, "end": 195.32, "text": " What is super resolution?"}, {"start": 195.32, "end": 202.52, "text": " Simple, of course pixelated image goes in and what comes out."}, {"start": 202.52, "end": 205.72, "text": " Of course, a beautiful detailed image."}, {"start": 205.72, "end": 207.72, "text": " How cool is that?"}, {"start": 207.72, "end": 210.52, "text": " And here comes number three."}, {"start": 210.52, "end": 214.72, "text": " It projects these images into a latent space."}, {"start": 214.72, "end": 215.92000000000002, "text": " What does that mean?"}, {"start": 215.92000000000002, "end": 225.12, "text": " A latent space is a made up place where we are trying to organize data in a way that similar things are close to each other."}, {"start": 225.12, "end": 232.72, "text": " In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene."}, {"start": 232.72, "end": 238.72, "text": " In this latent space, we can concoct all of these really cool digital material models."}, {"start": 238.72, "end": 242.52, "text": " A link to this work is available in the video description."}, {"start": 242.52, "end": 244.72, "text": " Now, let's see."}, {"start": 244.72, "end": 249.52, "text": " Yes, when we take a walk in the internal latent space of this technique,"}, {"start": 249.52, "end": 255.52, "text": " we can pick a starting point, a human face, and generate these animations"}, {"start": 255.52, "end": 260.32, "text": " as this face morphs into other possible human faces."}, {"start": 260.32, "end": 264.72, "text": " In short, it can generate a variety of different people."}, {"start": 264.72, "end": 266.12, "text": " Very cool."}, {"start": 266.12, "end": 267.72, "text": " I love it."}, {"start": 267.72, "end": 270.92, "text": " Now, of course, not even this technique is perfect."}, {"start": 270.92, "end": 279.92, "text": " I see some flickering around the teeth, but otherwise, this will be a fantastic tool for creating virtual people."}, {"start": 279.92, "end": 287.72, "text": " And remember, not only photos of virtual people, we get the 3D geometry for their heads too."}, {"start": 287.72, "end": 295.32000000000005, "text": " With this, we are one step closer to democratizing the creation of virtual humans in our virtual worlds."}, {"start": 295.32000000000005, "end": 297.32000000000005, "text": " What a time to be alive."}, {"start": 297.32, "end": 304.52, "text": " And if we have been holding onto your paper so far, now squeeze that paper because get this."}, {"start": 304.52, "end": 308.12, "text": " You can do all of this in real time."}, {"start": 308.12, "end": 
313.71999999999997, "text": " And all of these applications can be done with just one elegant AI technique."}, {"start": 313.71999999999997, "end": 318.71999999999997, "text": " Once again, scientists at Nvidia knocked it out of the park with this one."}, {"start": 318.71999999999997, "end": 319.71999999999997, "text": " Bravo."}, {"start": 319.71999999999997, "end": 321.71999999999997, "text": " So, what about you?"}, {"start": 321.71999999999997, "end": 326.71999999999997, "text": " If all this can be done in real time, what would you use this for?"}, {"start": 326.72, "end": 328.32000000000005, "text": " I'd love to know."}, {"start": 328.32000000000005, "end": 330.52000000000004, "text": " Please let me know in the comments below."}, {"start": 330.52000000000004, "end": 334.12, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 334.12, "end": 340.12, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 340.12, "end": 346.92, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 346.92, "end": 354.52000000000004, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 354.52, "end": 359.91999999999996, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 359.91999999999996, "end": 368.12, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 368.12, "end": 374.91999999999996, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 374.91999999999996, "end": 380.52, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 380.52, "end": 384.91999999999996, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=FYVf0bRgO5Q
DeepMind AlphaFold: A Gift To Humanity! 🧬
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Highly accurate protein structure prediction with #AlphaFold" is available here: https://www.nature.com/articles/s41586-021-03819-2 https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology Protein database: https://alphafold.ebi.ac.uk/ More on this: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology https://deepmind.com/research/case-studies/alphafold https://deepmind.com/blog/article/putting-the-power-of-alphafold-into-the-worlds-hands ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Chapters: 00:00 What is protein folding? 01:00 Is it so hard! 04:08 CASP 05:15 AlphaFold's results 06:41 Let's look inside! 07:41 Iterative refinement 08:41 Convolutional neural networks 08:51 Transformers 09:55 Everything matters! 10:58 Adding physics knowledge 13:00 What is this good for? 13:33 A gift to humanity! 14:05 Future works 14:56 AlphaFold's weaknesses 15:16 An important message Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Oh my goodness! This work is history in the making. Today we are going to have a look at AlphaFold, perhaps one of the most important papers of the last few years. And you will see that nothing that came before even comes close to it, and that it truly is a gift to humanity. So what is AlphaFold? AlphaFold is an AI that is capable of solving protein structure prediction, which we will refer to as protein folding. Okay, but what is a protein and why does it need folding? A protein is a string of amino acids. These are the building blocks of life. This is what goes in, which in reality has a 3D structure. And that is protein folding. Letters go in and a 3D object comes out. This is hard. How hard exactly? Well, let's compare it to DeepMind's amazing previous projects, and we'll see that none of these projects even come close in difficulty. For instance, DeepMind's previous AI learned to play chess. Now, why does this matter when we already have Deep Blue, a chess computer that can play at the very least as well as Kasparov, and it was built in the mid-1990s? So, why is chess interesting? Well, the space of possible moves is huge. And Deep Blue was not an AI in the stricter sense, but a handcrafted technique. This means that it can play chess and that's it. One algorithm, one game. If you want it to play a different game, you write a different algorithm. And yes, that is the key difference. DeepMind's chess AI is a general learning algorithm that can learn many games. For instance, Japanese chess, shogi, too. One algorithm, many games. And yes, chess is hard, but these days the AI can manage. Then, Go is the next level. This is not just hard, it is really hard. The space of possible moves is significantly bigger, and we can't just evaluate all the long-term effects of our moves. It is even more hopeless than chess. And that's often why people say that this game requires some sort of intuition to play. But DeepMind's AI solved that too and beat the world champion Go player 4-1 in a huge media event. The AI can still manage. Now, get this. If chess is hard and Go is very hard, then protein folding is sinfully difficult. Once again, the string of text encoding the amino acids goes in and a 3D structure comes out. Why is this hard? Why not just try every possible 3D structure and see what sticks? Well, not quite. The search space for this problem is still stupendously large, perhaps not as big as playing a continuous strategy game like StarCraft 2, but the search here is much less forgiving. Also, we don't have access to a perfect scoring function, so it is very difficult to define what exactly should be learned. You see, in a strategy game, a win is a win, but for proteins, nature doesn't really tell us what it is up to when creating these structures. Thus, DeepMind did very well in chess and Go and StarCraft 2, and challenging as they are, they are not even close to being as challenging as protein folding. Not even close. To demonstrate that, look, this is CASP. I've heard DeepMind CEO Demis Hassabis call it the Olympics of protein folding. If you look at how teams of scientists prepare for this event, you will probably agree that yes, this is indeed the Olympics of protein folding. At about a score of 90, we can think of protein folding as a mostly solved problem. But no need to worry too much about definitions. Look, we are not even close to 90. And it gets even worse. Look, this GDT score stands for the global distance test. 
This is a measure of similarity between the predicted and the real protein structure. And, wait a second, what? The results are not only not too good, but they appear to get worse over time. Is that true? What is going on here? Well, there is an explanation. The competition gets a little harder over time, so even flat results mean that there is a little improvement. And now, hold on to your papers, and let's look at the results from DeepMind's AI-based solution, AlphaFold. Wow, now we're talking. Look at that. The competition gets harder, and it is not only flat, but, can that really be, it is even better than the previous methods. But we are not done here. No, no, not even close. If you have been holding onto your papers so far, now squeeze that paper, because what you see here is old news. Only two years later, AlphaFold 2 appeared. And just look at that. It came in guns blazing. So much so that the result is, I can't believe it, it is around the 90 mark. My goodness, that is history in the making. Yes, this is the place on the internet where we get unreasonably excited by a large blue bar. Welcome to Two Minute Papers. But what does this really mean? Well, in absolute terms, AlphaFold 2 is considered to be about three times better than previous solutions. And all that in just two years. That is a miracle right in front of our eyes. Now, let's pop the hood and see what is inside this AI. And, hmm, look at all these elements in the system that make this happen. So, where do we even start? Which of these is the most important? What is the key? Well, everything. And nothing. I will explain this in a moment. That does not sound very enlightening. So, what is going on? Well, DeepMind ran a detailed ablation study on what mattered, and the result is the following. Everything mattered. Look, with few exceptions, every part adds its own little piece to the final result. But none of these techniques is a silver bullet. Now, to understand a bit more about what is going on here, let's look at three things. One, AlphaFold 2 is an end-to-end network that can perform iterative refinement. What do these mean? They mean that everything needed to solve the task is learned by the network, and that it starts out from a rough initial guess and then gradually improves it. You see this process here, and it truly is a sight to behold. Two, it uses an attention-based model. What does that mean? Well, look. This is a convolutional neural network. This is wired in a way that information flows to neighboring neurons. This is great for image recognition, because usually, the required information is located nearby. For instance, let's imagine that we wish to train a neural network that can recognize a dog. What do we need to look at? Well, floppy ears, a black snout, fur, okay, we're good, we can conclude that we have a dog here. Now, have you noticed? Yes, all of this information is located nearby. Therefore, a convolutional neural network is expected to do really well at that. However, check this out. This is a transformer, which is an attention-based model. Here, information does not flow between neighbors, no sir. Here, information flows everywhere. This has spontaneous connections that are great for almost anything if we can use them well. For instance, when reading a book, if we are at page 100, we might need to recall some information from page 1. Transformers are excellent for tasks like that. 
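To make the convolution-versus-attention contrast concrete, here is a minimal PyTorch sketch (a generic illustration of the two layer types, not AlphaFold 2's actual architecture; all sizes are arbitrary): the convolution only mixes each position with its immediate neighbors, while self-attention lets position 1 and position 100 exchange information in a single layer, just like the page-100-to-page-1 recall in the book analogy.

```python
import torch
import torch.nn as nn

seq = torch.randn(1, 100, 64)  # 1 sequence, 100 "residues", 64 features each

# Convolution: with kernel_size=3, output position i only sees i-1, i, i+1.
conv = nn.Conv1d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
local_out = conv(seq.transpose(1, 2)).transpose(1, 2)  # Conv1d wants (N, C, L)

# Self-attention: every position attends to every other position at once.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
global_out, weights = attn(seq, seq, seq)

print(local_out.shape, global_out.shape)  # both torch.Size([1, 100, 64])
print(weights.shape)  # torch.Size([1, 100, 100]): all-pairs attention weights
```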
They are still quite new, just a few years old, and are already making breakthroughs. So, why use them for protein folding? Well, things that are 200 amino acids apart in the text description can still be right next to each other in 3D space. Yes, now we know that for that, we need attention networks, for instance, a transformer. These are seeing a great deal of use these days. For instance, Tesla also uses them for training their self-driving cars. Yes, so these things mattered. But so many other things did too. Now, I mentioned that the key is everything. And nothing. What does that mean? Well, look here. Apart from a couple of examples, there is no silver bullet here. Every single one of these improvements bumped the score a little bit. But all of them are needed for the breakthrough. Now, one of the important elements is also adding physics knowledge. How do you do that? Well, typically the answer is that you don't. You see, when we design a handcrafted technique, we write the knowledge into an algorithm by hand. For instance, in chess, there are a bunch of well-known openings for the algorithm to consider. For protein folding, we can tell the algorithm that if you see this structure, it typically bends this way. Or we can also show it common protein templates, kind of like openings in chess. We can add all of this valuable expertise to a handcrafted technique. Now, we noted that scientists at DeepMind decided to use an end-to-end learning system. I would like to unpack that for a moment, because this design decision is not trivial at all. In fact, in a moment, I bet you will think that it's flat-out counterintuitive. Let me explain. If we are physics simulation researchers and we have a physics simulation program, we take our physics knowledge and write a computer program to make use of that knowledge. For instance, here you see this being used to great effect, so much so that what you see here is not reality, but a physics simulation. All handcrafted. Clearly, using this concept, we can see that human ingenuity goes very far, and we can write super powerful programs. Or we can do end-to-end learning, where, surprisingly, we don't write our knowledge into the algorithm at all. We give it training data instead, and let the AI build up its own knowledge base from that. And AlphaFold is an end-to-end learning project, so almost everything is learned. Almost. And one of the great challenges of this project was to infuse the AI with physics knowledge without impacting the learning. That is super hard. So, what about training? How long does that take? Well, get this. DeepMind can train this incredible folding AI in as little as two weeks. Hmm, why is two weeks so little? Well, after this step is done, the AI can be given a new input and will be able to create this 3D structure in about a minute. And we can then reuse this trained neural network for as long as we wish. Phew, so this is a lot of trouble to fold these proteins. So, what is all this good for? The list of applications is very impressive. I'll give you just a small subset of them that I really liked. For instance, it helps us better understand the human body, create better medicine against malaria and many other diseases, develop healthier food, or develop enzymes to break down plastic waste, and that is just the start. Now, you're probably asking, Károly, you keep saying that this is a gift to humanity. So, why is it a gift to humanity? Well, here comes the best part. 
A little after publishing the paper, DeepMind made these 3D structure predictions available free for everyone. For instance, they have made their human protein predictions public. Beyond that, they have already made their predictions public for yeast, important pathogens, crop species, and more. And thus, I have already seen follow-up works on how to use this for developing new drugs. What a time to be alive! Now, note that this is but one step in a thousand-step journey. But one important step nonetheless. And I would like to send huge congratulations to DeepMind. Something like this costs a ton to develop, and note that it is not easy, or maybe not even possible, to immediately make a product out of this and monetize it. This truly is a gift to humanity. And a project like this can only emerge from proper long-term thinking that focuses on what matters in the long term, not just thinking about what is right now. Bravo. Now, of course, not even AlphaFold 2 is perfect. For instance, it's not always very confident about its own solutions. And it also performs poorly on antibody interactions. Both of these are subject to intense scrutiny, and follow-up papers are already appearing in these directions. Now, one last thing. Why does this video exist? I got a lot of questions from you asking why I made no video on AlphaFold 2. Well, protein folding is a highly multidisciplinary problem which, beyond machine learning, requires tons of knowledge in biology, physics, and engineering. Thus, my answer was that I don't feel qualified to speak about this project, so I better not. However, something has changed. What has changed? Well, now I had the help of someone who is very qualified. As qualified as it gets, because it is the one and only John Jumper, the first author of the paper, who kindly agreed to review the contents of this video to make sure that I did not mess up too badly. Thus, I would like to send a big thank you to John, his team, and DeepMind for creating AlphaFold 2 and helping this video come into existence. It came late, so we missed out on a ton of views, but that doesn't matter. What matters is that you get an easy-to-understand and accurate description of AlphaFold 2. Thank you so much for your patience. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
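As a practical follow-up to the public release mentioned in the transcript above, here is a hedged example of downloading one predicted structure from the protein database linked in the video description. The file-naming pattern and version suffix are assumptions based on how the AlphaFold database has commonly been accessed, so check alphafold.ebi.ac.uk if the URL has changed.

```python
import urllib.request

accession = "P69905"  # UniProt accession for human hemoglobin subunit alpha
# Assumed naming scheme: AF-<accession>-F1-model_v<version>.pdb
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"
with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")
print(pdb_text.splitlines()[0])  # first header line of the predicted structure
```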
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 6.640000000000001, "text": " Oh my goodness!"}, {"start": 6.640000000000001, "end": 9.68, "text": " This work is history in the making."}, {"start": 9.68, "end": 12.96, "text": " Today we are going to have a look at Alpha Fold,"}, {"start": 12.96, "end": 17.6, "text": " perhaps one of the most important papers of the last few years."}, {"start": 17.6, "end": 22.96, "text": " And you will see that nothing that came before even comes close to it"}, {"start": 22.96, "end": 26.64, "text": " and that it truly is a gift to humanity."}, {"start": 26.64, "end": 28.88, "text": " So what is Alpha Fold?"}, {"start": 28.88, "end": 34.4, "text": " Alpha Fold is an AI that is capable of solving protein structure prediction"}, {"start": 34.4, "end": 37.84, "text": " which we will refer to as protein folding."}, {"start": 37.84, "end": 43.28, "text": " Okay, but what is a protein and why does it need folding?"}, {"start": 43.28, "end": 46.4, "text": " A protein is a string of amino acids."}, {"start": 46.4, "end": 49.28, "text": " These are the building blocks of life."}, {"start": 49.28, "end": 54.480000000000004, "text": " This is what goes in which in reality has a 3D structure."}, {"start": 54.480000000000004, "end": 56.879999999999995, "text": " And that is protein folding."}, {"start": 56.88, "end": 61.36, "text": " Letters go in and a 3D object comes out."}, {"start": 61.36, "end": 63.040000000000006, "text": " This is hard."}, {"start": 63.040000000000006, "end": 64.8, "text": " How hard exactly?"}, {"start": 64.8, "end": 69.28, "text": " Well, let's compare it to DeepMind's amazing previous projects"}, {"start": 69.28, "end": 74.88, "text": " and we'll see that none of these projects even come close in difficulty."}, {"start": 74.88, "end": 80.0, "text": " For instance, DeepMind's previous AI learned to play chess."}, {"start": 80.0, "end": 83.76, "text": " Now, why does this matter as we already have D-Blue"}, {"start": 83.76, "end": 90.24000000000001, "text": " which is a chess computer that can play at the very least as well as Casper of Dead"}, {"start": 90.24000000000001, "end": 94.08000000000001, "text": " and it was built in 1995?"}, {"start": 94.08000000000001, "end": 96.96000000000001, "text": " So, why is chess interesting?"}, {"start": 96.96000000000001, "end": 100.64, "text": " Well, the space of possible moves is huge."}, {"start": 100.64, "end": 106.4, "text": " And D-Blue in 1995 was not an AI in the stricter sense,"}, {"start": 106.4, "end": 108.72, "text": " but a handcrafted technique."}, {"start": 108.72, "end": 112.4, "text": " This means that it can play chess and that's it."}, {"start": 112.4, "end": 114.72, "text": " One algorithm, one game."}, {"start": 114.72, "end": 119.52000000000001, "text": " If you want it to play a different game, you write a different algorithm."}, {"start": 119.52000000000001, "end": 122.96000000000001, "text": " And yes, that is the key difference."}, {"start": 122.96000000000001, "end": 126.72, "text": " DeepMind's chess AI is a general learning algorithm"}, {"start": 126.72, "end": 128.8, "text": " that can learn many games."}, {"start": 128.8, "end": 132.48000000000002, "text": " For instance, Japanese chess show G2."}, {"start": 132.48000000000002, "end": 135.28, "text": " One algorithm, many games."}, {"start": 135.28, "end": 141.04000000000002, "text": " And yes, chess is hard, but these days the AI 
can manage."}, {"start": 141.04, "end": 143.84, "text": " Then, GO is the next level."}, {"start": 143.84, "end": 147.76, "text": " This is not just hard, it is really hard."}, {"start": 147.76, "end": 151.44, "text": " The space of possible moves is significantly bigger"}, {"start": 151.44, "end": 155.84, "text": " and we can't just evaluate all the long-term effects of our moves."}, {"start": 155.84, "end": 158.16, "text": " It is even more hopeless than chess."}, {"start": 158.16, "end": 164.88, "text": " And that's often why people say that this game requires some sort of intuition to play."}, {"start": 164.88, "end": 168.0, "text": " But DeepMind's AI solved that too"}, {"start": 168.0, "end": 174.0, "text": " and beat the world champion GO player 4-1 in a huge media event."}, {"start": 174.0, "end": 176.48, "text": " The AI can still manage."}, {"start": 176.48, "end": 177.84, "text": " Now, get this."}, {"start": 177.84, "end": 181.36, "text": " If chess is hard and GO is very hard,"}, {"start": 181.36, "end": 184.56, "text": " then protein folding is sinfully difficult."}, {"start": 184.56, "end": 189.04, "text": " Once again, the string of text and coding the amino acids go in"}, {"start": 189.04, "end": 192.0, "text": " and a 3D structure comes out."}, {"start": 192.0, "end": 193.52, "text": " Why is this hard?"}, {"start": 193.52, "end": 196.96, "text": " Why not just try every possible 3D structure"}, {"start": 196.96, "end": 198.8, "text": " and see what sticks?"}, {"start": 198.8, "end": 200.48000000000002, "text": " Well, not quite."}, {"start": 200.48000000000002, "end": 204.64000000000001, "text": " The third space for this problem is still stupendously large,"}, {"start": 204.64000000000001, "end": 210.48000000000002, "text": " perhaps not as big as playing a continuous strategy game like Starcraft 2,"}, {"start": 210.48000000000002, "end": 213.92000000000002, "text": " but the search here is much less forgiving."}, {"start": 213.92000000000002, "end": 217.68, "text": " Also, we don't have access to a perfect scoring function,"}, {"start": 217.68, "end": 222.48000000000002, "text": " so it is very difficult to define what exactly should be learned."}, {"start": 222.48000000000002, "end": 225.76000000000002, "text": " You see, in a strategy game, a win is a win,"}, {"start": 225.76, "end": 229.76, "text": " but for proteins, nature doesn't really tell us"}, {"start": 229.76, "end": 232.88, "text": " what it is up to when creating these structures."}, {"start": 232.88, "end": 238.07999999999998, "text": " Thus, DeepMind did very well in chess and GO and Starcraft 2"}, {"start": 238.07999999999998, "end": 241.84, "text": " and challenging as they are, they are not even close"}, {"start": 241.84, "end": 245.2, "text": " to being as challenging as protein folding."}, {"start": 245.2, "end": 246.72, "text": " Not even close."}, {"start": 246.72, "end": 250.32, "text": " To demonstrate that, look, this is Casp."}, {"start": 250.32, "end": 253.44, "text": " I've heard DeepMind CEO Demis Hassabis"}, {"start": 253.44, "end": 256.4, "text": " call it the Olympics of protein folding."}, {"start": 256.4, "end": 260.32, "text": " If you look at how teams of scientists prepare for this event,"}, {"start": 260.32, "end": 265.84, "text": " you will probably agree that yes, this is indeed the Olympics of protein folding."}, {"start": 265.84, "end": 270.0, "text": " At about a score of 90, we can think of protein folding"}, {"start": 270.0, "end": 272.4, "text": " as a mostly solved 
problem."}, {"start": 272.4, "end": 275.44, "text": " But, no need to worry about definitions, though."}, {"start": 275.44, "end": 278.88, "text": " Look, we are not even close to 90."}, {"start": 278.88, "end": 281.12, "text": " And it gets even worse."}, {"start": 281.12, "end": 285.44, "text": " Look, this GDT score means the global distance test."}, {"start": 285.44, "end": 289.76, "text": " This is a measure of similarity between the predicted"}, {"start": 289.76, "end": 292.16, "text": " and the real protein structure."}, {"start": 292.16, "end": 295.76, "text": " And, wait a second, what?"}, {"start": 295.76, "end": 298.4, "text": " The results are not only not too good,"}, {"start": 298.4, "end": 302.16, "text": " but they appear to get worse over time."}, {"start": 302.16, "end": 303.44, "text": " Is that true?"}, {"start": 303.44, "end": 305.12, "text": " What is going on here?"}, {"start": 305.12, "end": 307.76, "text": " Well, there is an explanation."}, {"start": 307.76, "end": 311.12, "text": " The competition gets a little harder over time,"}, {"start": 311.12, "end": 317.28, "text": " so even flat results mean that there is a little improvement over time."}, {"start": 317.28, "end": 319.36, "text": " And now, hold on to your papers,"}, {"start": 319.36, "end": 324.96, "text": " and let's look at the results from DeepMind's AIB solution, AlphaFold."}, {"start": 324.96, "end": 326.8, "text": " Wow, now we're talking."}, {"start": 327.52, "end": 328.96, "text": " Look at that."}, {"start": 328.96, "end": 332.64, "text": " The competition gets harder, and it is not only flat,"}, {"start": 332.64, "end": 337.68, "text": " but can that really be, it is even better than the previous methods."}, {"start": 338.4, "end": 340.0, "text": " But, we are not done here."}, {"start": 340.71999999999997, "end": 343.03999999999996, "text": " No, no, not even close."}, {"start": 343.03999999999996, "end": 345.68, "text": " If you have been holding onto your papers so far,"}, {"start": 345.68, "end": 350.88, "text": " now squeeze that paper, because what you see here is old news."}, {"start": 351.52, "end": 355.28, "text": " Only two years later, AlphaFold 2 appeared."}, {"start": 356.0, "end": 358.56, "text": " And, just look at that."}, {"start": 358.56, "end": 361.2, "text": " It came in GAN's blazing."}, {"start": 361.2, "end": 363.12, "text": " So much so that the result is,"}, {"start": 364.24, "end": 366.0, "text": " I can't believe it."}, {"start": 366.0, "end": 368.32, "text": " It is around the 90th mark."}, {"start": 369.12, "end": 372.64, "text": " My goodness, that is history in the making."}, {"start": 373.44, "end": 375.76, "text": " Yes, this is the place on the internet,"}, {"start": 375.76, "end": 379.44, "text": " where we get unreasonably excited by a large blue bar."}, {"start": 380.15999999999997, "end": 381.76, "text": " Welcome to two-minute papers."}, {"start": 382.4, "end": 384.24, "text": " But, what does this really mean?"}, {"start": 384.88, "end": 389.36, "text": " Well, in absolute terms, AlphaFold 2 is considered to be"}, {"start": 389.36, "end": 392.72, "text": " about three times better than previous solutions."}, {"start": 393.2, "end": 396.0, "text": " And, all that in just two years."}, {"start": 396.56, "end": 399.68, "text": " That is a miracle right in front of our eyes."}, {"start": 400.48, "end": 404.48, "text": " Now, let's pop the hood and see what is inside this AI."}, {"start": 404.48, "end": 409.68, "text": " And, hmm, look at all these elements in 
the system that make this happen."}, {"start": 410.32, "end": 412.24, "text": " So, where do we even start?"}, {"start": 412.8, "end": 414.8, "text": " Which of these is the most important?"}, {"start": 415.36, "end": 416.16, "text": " What is the key?"}, {"start": 416.96000000000004, "end": 418.16, "text": " Well, everything."}, {"start": 418.16, "end": 419.76000000000005, "text": " And, nothing."}, {"start": 419.76000000000005, "end": 421.84000000000003, "text": " I will explain this in a moment."}, {"start": 421.84000000000003, "end": 424.48, "text": " That does not sound very enlightening."}, {"start": 424.48, "end": 426.16, "text": " So, what is going on?"}, {"start": 426.16, "end": 431.12, "text": " Well, indeed, mind ran a detailed ablation study on what mattered"}, {"start": 431.12, "end": 433.44000000000005, "text": " and the result is the following."}, {"start": 433.44000000000005, "end": 435.04, "text": " Everything mattered."}, {"start": 435.04, "end": 441.36, "text": " Look, with few exceptions, every part adds its own little piece to the final result."}, {"start": 441.36, "end": 445.12, "text": " But, none of these techniques are a silver bullet."}, {"start": 445.12, "end": 449.44, "text": " But, to understand a bit more about what is going on here,"}, {"start": 449.44, "end": 451.52, "text": " let's look at three things."}, {"start": 451.52, "end": 455.52, "text": " One, AlphaFold 2 is an end-to-end network"}, {"start": 455.52, "end": 458.08, "text": " that can perform iterative refinement."}, {"start": 458.08, "end": 459.6, "text": " What do these mean?"}, {"start": 459.6, "end": 464.96, "text": " What this means is that everything needed to solve the task is learned by the network."}, {"start": 464.96, "end": 468.16, "text": " And, that it starts out from a rough initial guess."}, {"start": 468.16, "end": 470.96, "text": " And then, it gradually improves it."}, {"start": 470.96, "end": 475.91999999999996, "text": " You see this process here, and it truly is a sight to behold."}, {"start": 475.91999999999996, "end": 479.44, "text": " Two, it uses an attention-based model."}, {"start": 479.44, "end": 480.96, "text": " What does that mean?"}, {"start": 480.96, "end": 482.4, "text": " Well, look."}, {"start": 482.4, "end": 484.96, "text": " This is a convolutional neural network."}, {"start": 484.96, "end": 490.56, "text": " This is wired in a way that information flows to neighboring neurons."}, {"start": 490.56, "end": 493.35999999999996, "text": " This is great for image recognition,"}, {"start": 493.35999999999996, "end": 498.56, "text": " because usually, the required information is located nearby."}, {"start": 498.56, "end": 503.36, "text": " For instance, let's imagine that we wish to train a neural network"}, {"start": 503.36, "end": 505.12, "text": " that can recognize a dog."}, {"start": 505.12, "end": 506.88, "text": " What do we need to look at?"}, {"start": 506.88, "end": 512.56, "text": " Well, floppy ears, a black snout, fur, okay, we're good,"}, {"start": 512.56, "end": 515.12, "text": " we can conclude that we have a dog here."}, {"start": 515.12, "end": 517.28, "text": " Now, have you noticed?"}, {"start": 517.28, "end": 521.52, "text": " Yes, all of this information is located nearby."}, {"start": 521.52, "end": 526.96, "text": " Therefore, a convolutional neural network is expected to do really well at that."}, {"start": 526.96, "end": 529.9200000000001, "text": " However, check this out."}, {"start": 529.9200000000001, "end": 533.9200000000001, "text": " 
This is a transformer, which is an attention-based model."}, {"start": 533.9200000000001, "end": 538.72, "text": " Here, information does not flow between neighbors, no sir."}, {"start": 538.72, "end": 541.76, "text": " Here, information flows everywhere."}, {"start": 541.76, "end": 546.5600000000001, "text": " This has spontaneous connections that are great for almost anything"}, {"start": 546.5600000000001, "end": 548.72, "text": " if we can use them well."}, {"start": 548.72, "end": 550.88, "text": " For instance, when reading a book,"}, {"start": 550.88, "end": 553.12, "text": " if we are at page 100,"}, {"start": 553.12, "end": 557.12, "text": " we might need to recall some information from page 1."}, {"start": 557.12, "end": 560.48, "text": " Transformers are excellent for tasks like that."}, {"start": 560.48, "end": 563.76, "text": " They are still quite new, just a few years old,"}, {"start": 563.76, "end": 565.92, "text": " and are already making breakthroughs."}, {"start": 566.8, "end": 570.0, "text": " So, why use them for protein folding?"}, {"start": 570.0, "end": 575.12, "text": " Well, things that are 200 amino acids apart in the text description"}, {"start": 575.12, "end": 579.12, "text": " can still be right next to each other in the 3D space."}, {"start": 579.12, "end": 583.92, "text": " Yes, now we know that for that we need attention networks,"}, {"start": 583.92, "end": 586.0, "text": " for instance, a transformer."}, {"start": 586.0, "end": 589.12, "text": " These are seeing a great deal of use these days."}, {"start": 589.12, "end": 593.92, "text": " For instance, Tesla also uses them for training their self-driving cars."}, {"start": 593.92, "end": 597.36, "text": " Yes, so these things mattered."}, {"start": 597.36, "end": 600.64, "text": " But so many other things did too."}, {"start": 600.64, "end": 604.0, "text": " Now, I mentioned that the key is everything."}, {"start": 604.0, "end": 605.6800000000001, "text": " And nothing."}, {"start": 605.6800000000001, "end": 607.28, "text": " What does that mean?"}, {"start": 607.28, "end": 609.28, "text": " Well, look here."}, {"start": 609.28, "end": 611.28, "text": " Apart from a couple examples,"}, {"start": 611.28, "end": 613.28, "text": " there is no silver bullet here."}, {"start": 613.28, "end": 615.28, "text": " Every single one of these improvements"}, {"start": 615.28, "end": 617.28, "text": " bummed the score a little bit."}, {"start": 617.28, "end": 620.4, "text": " But all of them are needed for the breakthrough."}, {"start": 620.4, "end": 625.28, "text": " Now, one of the important elements is also adding physics knowledge."}, {"start": 625.28, "end": 627.28, "text": " How do you do that?"}, {"start": 627.28, "end": 630.48, "text": " Well, typically the answer is that you don't."}, {"start": 630.48, "end": 634.0799999999999, "text": " You see, when we design a handcrafted technique,"}, {"start": 634.08, "end": 638.08, "text": " we write the knowledge into an algorithm by hand."}, {"start": 638.08, "end": 640.08, "text": " For instance, in chess,"}, {"start": 640.08, "end": 644.08, "text": " there are a bunch of well-known openings for the algorithm to consider."}, {"start": 644.08, "end": 646.08, "text": " For protein folding,"}, {"start": 646.08, "end": 649.2800000000001, "text": " we can tell the algorithm that if you see this structure,"}, {"start": 649.2800000000001, "end": 651.2800000000001, "text": " it typically bends this way."}, {"start": 651.2800000000001, "end": 655.2800000000001, "text": " Or we 
can also show it common protein templates"}, {"start": 655.2800000000001, "end": 657.2800000000001, "text": " kind of like openings for chess."}, {"start": 657.2800000000001, "end": 662.08, "text": " We can add all these valuable expertise to a handcrafted technique."}, {"start": 662.08, "end": 666.08, "text": " Now, we noted that scientists at DeepMind"}, {"start": 666.08, "end": 670.08, "text": " decided to use an end-to-end learning system."}, {"start": 670.08, "end": 672.08, "text": " I would like to unpack that for a moment"}, {"start": 672.08, "end": 676.08, "text": " because this design decision is not trivial at all."}, {"start": 676.08, "end": 682.08, "text": " In fact, in a moment, I bet you will think that it's flat out counterintuitive."}, {"start": 682.08, "end": 683.08, "text": " Let me explain."}, {"start": 683.08, "end": 686.08, "text": " If we are a physics simulation researcher,"}, {"start": 686.08, "end": 688.08, "text": " and we have a physics simulation program,"}, {"start": 688.08, "end": 692.08, "text": " we take our physics knowledge and write a computer program"}, {"start": 692.08, "end": 694.08, "text": " to make use of that knowledge."}, {"start": 694.08, "end": 698.08, "text": " For instance, here you see this being used to great effect,"}, {"start": 698.08, "end": 702.08, "text": " so much so that what you see here is not reality,"}, {"start": 702.08, "end": 704.08, "text": " but a physics simulation."}, {"start": 704.08, "end": 706.08, "text": " All handcrafted."}, {"start": 706.08, "end": 708.08, "text": " Clearly, using this concept,"}, {"start": 708.08, "end": 712.08, "text": " we can see that human ingenuity goes very far,"}, {"start": 712.08, "end": 715.08, "text": " and we can write super powerful programs."}, {"start": 715.08, "end": 719.08, "text": " Or we can do end-to-end learning, where, surprisingly,"}, {"start": 719.08, "end": 723.08, "text": " we don't write our knowledge into the algorithm at all."}, {"start": 723.08, "end": 731.08, "text": " We give it training data instead, and let the AI build up its own knowledge base from that."}, {"start": 731.08, "end": 734.08, "text": " And AlphaFold is an end-to-end learning project,"}, {"start": 734.08, "end": 737.08, "text": " so almost everything is learned."}, {"start": 737.08, "end": 738.08, "text": " Almost."}, {"start": 738.08, "end": 741.08, "text": " And one of the great challenges of this project"}, {"start": 741.08, "end": 745.08, "text": " was to infuse the AI with physics knowledge"}, {"start": 745.08, "end": 748.08, "text": " without impacting the learning."}, {"start": 748.08, "end": 750.08, "text": " That is super hard."}, {"start": 750.08, "end": 752.08, "text": " So, training her?"}, {"start": 752.08, "end": 754.08, "text": " How long does this take?"}, {"start": 754.08, "end": 755.08, "text": " Well, get this."}, {"start": 755.08, "end": 759.08, "text": " Deep-mind can train this incredible folding AI"}, {"start": 759.08, "end": 762.08, "text": " in as little as two weeks."}, {"start": 762.08, "end": 765.08, "text": " Hmm, why is two weeks so little?"}, {"start": 765.08, "end": 770.08, "text": " Well, after this step is done, the AI can be given a new input"}, {"start": 770.08, "end": 775.08, "text": " and will be able to create this 3D structure in about a minute."}, {"start": 775.08, "end": 779.08, "text": " And we can then reuse this training neural network"}, {"start": 779.08, "end": 781.08, "text": " for as long as we wish."}, {"start": 781.08, "end": 786.08, "text": " Phew, so 
this is a lot of trouble to fold these proteins."}, {"start": 786.08, "end": 789.08, "text": " So, what is all this good for?"}, {"start": 789.08, "end": 792.08, "text": " The list of applications is very impressive."}, {"start": 792.08, "end": 796.08, "text": " I'll give you just a small subset of them that I really liked."}, {"start": 796.08, "end": 800.08, "text": " For instance, it helps us better understand the human body"}, {"start": 800.08, "end": 803.08, "text": " or create better medicine against malaria"}, {"start": 803.08, "end": 808.08, "text": " and many other diseases develop more healthy food"}, {"start": 808.08, "end": 811.08, "text": " or develop enzymes to break down plastic waste"}, {"start": 811.08, "end": 813.08, "text": " and that is just the start."}, {"start": 813.08, "end": 816.08, "text": " Now, you're probably asking, Karoy,"}, {"start": 816.08, "end": 819.08, "text": " you keep saying that this is a gift to humanity."}, {"start": 819.08, "end": 823.08, "text": " So, why is it a gift to humanity?"}, {"start": 823.08, "end": 826.08, "text": " Well, here comes the best part."}, {"start": 826.08, "end": 828.08, "text": " A little after publishing the paper,"}, {"start": 828.08, "end": 832.08, "text": " DeepMind made these 3D structure predictions available"}, {"start": 832.08, "end": 834.08, "text": " free for everyone."}, {"start": 834.08, "end": 838.08, "text": " For instance, they have made their human protein predictions public."}, {"start": 838.08, "end": 842.08, "text": " Beyond that, they already have made their predictions public"}, {"start": 842.08, "end": 846.08, "text": " for yeast, important pathogens, crop species and more."}, {"start": 846.08, "end": 849.08, "text": " And thus, I have already seen follow-up works"}, {"start": 849.08, "end": 853.08, "text": " on how to use this for developing new drugs."}, {"start": 853.08, "end": 855.08, "text": " What a time to be alive!"}, {"start": 855.08, "end": 860.08, "text": " Now, note that this is but one step in a thousand-step journey."}, {"start": 860.08, "end": 863.08, "text": " But one important step nonetheless."}, {"start": 863.08, "end": 867.08, "text": " And I would like to send huge congratulations to DeepMind."}, {"start": 867.08, "end": 870.08, "text": " Something like this costs a ton to develop"}, {"start": 870.08, "end": 875.08, "text": " a note that it is not easy or maybe not even possible"}, {"start": 875.08, "end": 880.08, "text": " to immediately make a product out of this and monetize it."}, {"start": 880.08, "end": 883.08, "text": " This truly is a gift to humanity."}, {"start": 883.08, "end": 888.08, "text": " And a project like this can only emerge from proper long-term thinking"}, {"start": 888.08, "end": 891.08, "text": " that focuses on what matters in the long term."}, {"start": 891.08, "end": 895.08, "text": " Not just thinking about what is right now."}, {"start": 895.08, "end": 896.08, "text": " A bravo."}, {"start": 896.08, "end": 900.08, "text": " Now, of course, not even AlphaFull2 is perfect."}, {"start": 900.08, "end": 904.08, "text": " For instance, it's not always very confident about its own solutions."}, {"start": 904.08, "end": 909.08, "text": " And it also performs poorly in antibody interactions."}, {"start": 909.08, "end": 912.08, "text": " Both of these are subject to intense scrutiny"}, {"start": 912.08, "end": 916.08, "text": " and follow-up papers are already appearing in these directions."}, {"start": 916.08, "end": 919.08, "text": " Now, one last thing."}, 
{"start": 919.08, "end": 921.08, "text": " Why does this video exist?"}, {"start": 921.08, "end": 928.08, "text": " I got a lot of questions from you asking why I made no video on AlphaFull2."}, {"start": 928.08, "end": 932.08, "text": " Well, protein folding is a highly multidisciplinary problem"}, {"start": 932.08, "end": 937.08, "text": " which, beyond machine learning, requires tons of knowledge in biology,"}, {"start": 937.08, "end": 940.08, "text": " physics, and engineering."}, {"start": 940.08, "end": 945.08, "text": " Thus, my answer was that I don't feel qualified to speak about this project,"}, {"start": 945.08, "end": 947.08, "text": " so I better not."}, {"start": 947.08, "end": 950.08, "text": " However, something has changed."}, {"start": 950.08, "end": 952.08, "text": " What has changed?"}, {"start": 952.08, "end": 957.08, "text": " Well, now I had the help of someone who is very qualified."}, {"start": 957.08, "end": 962.08, "text": " As qualified as it gets, because it is the one and only John Jumper,"}, {"start": 962.08, "end": 968.08, "text": " the first author of the paper who kindly agreed to review the contents of this video"}, {"start": 968.08, "end": 971.08, "text": " to make sure that I did not mess up too badly."}, {"start": 971.08, "end": 976.08, "text": " Thus, I would like to send a big thank you to John, his team,"}, {"start": 976.08, "end": 982.08, "text": " and deep-mind for creating AlphaFull2 and helping this video come into existence."}, {"start": 982.08, "end": 987.08, "text": " It came late, so we missed out on a ton of views, but that doesn't matter."}, {"start": 987.08, "end": 994.08, "text": " What matters is that you get an easy to understand and accurate description of AlphaFull2."}, {"start": 994.08, "end": 997.08, "text": " Thank you so much for your patience."}, {"start": 997.08, "end": 1000.08, "text": " This video has been supported by weights and biases."}, {"start": 1000.08, "end": 1005.08, "text": " Being a machine learning researcher means doing tons of experiments"}, {"start": 1005.08, "end": 1008.08, "text": " and, of course, creating tons of data."}, {"start": 1008.08, "end": 1012.08, "text": " But I am not looking for data, I am looking for insights."}, {"start": 1012.08, "end": 1016.08, "text": " And weights and biases helps with exactly that."}, {"start": 1016.08, "end": 1020.08, "text": " They have tools for experiment tracking, data set and model versioning,"}, {"start": 1020.08, "end": 1023.08, "text": " and even hyper-parameter optimization."}, {"start": 1023.08, "end": 1028.08, "text": " No wonder this is the experiment tracking tool choice of open AI,"}, {"start": 1028.08, "end": 1032.08, "text": " Toyota Research, Samsung, and many more prestigious labs."}, {"start": 1032.08, "end": 1038.08, "text": " Make sure to use the link WNB.me slash paper intro,"}, {"start": 1038.08, "end": 1041.08, "text": " or just click the link in the video description,"}, {"start": 1041.08, "end": 1045.08, "text": " and try this 10-minute example of weights and biases today"}, {"start": 1045.08, "end": 1049.08, "text": " to experience the wonderful feeling of training a neural network"}, {"start": 1049.08, "end": 1053.08, "text": " and being in control of your experiments."}, {"start": 1053.08, "end": 1055.08, "text": " After you try it, you won't want to go back."}, {"start": 1055.08, "end": 1062.08, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XM-rKTOyD_k
This Blind Robot Can Walk...But How? 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (Thank you Soumik Rakshit!): http://wandb.me/perceptive-locomotion 📝 The paper "Learning robust perceptive locomotion for quadrupedal robots in the wild" is available here: https://leggedrobotics.github.io/rl-perceptiveloco/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how a robot doggy can help us explore dangerous areas without putting humans at risk. Well, not this one, mind you, because this one is not performing too well. Why is that? Well, this is a proprioceptive robot. This means that the robot is essentially blind. Yes, really. It only senses its own internal state, and that's it. For instance, it knows about the orientation and twist of its base unit, plus a little joint information, like positions and velocities, and that's about it. It is still remarkable that it can navigate at all. However, this often means that the robot has to feel out the terrain before being able to navigate on it. Why would we do that? Well, if we have difficult seeing conditions, for instance dust, fog, or smoke, a robot that does not rely on seeing is suddenly super useful. However, when we have good seeing conditions, as you see, we have to be able to do better than this. So, can we? Well, have a look at this new technique. Hmm, now we're talking. This new robot is exteroceptive, which means that it has cameras, it can see, but it also has proprioceptive sensors too. These are the ones that tell the orientation and similar information. So this new one fuses together proprioception and exteroception. What does that mean? Well, it can see, and it can feel too. Kind of. The best of both worlds. Here you see how it sees the world. Thus, it does not need to feel out the terrain before navigating. But, and here's the key, now hold on to your papers, because it can navigate reasonably well even if its sensors get covered. Look. That is absolutely amazing. And with the covers removed, we can give it another try. Let's see the difference, and oh yeah, back in the game, baby. So far, great news, but what are the limits of this machine? Can it climb stairs? Well, let's have a look. Yep, not a problem. Not only that, but it can even go on a long hike in Switzerland and reach the summit without any issues, and in general, it can navigate a variety of really difficult terrains. So, if it can do all that, now let's be really tough on it and put it to the real test. This is the testing footage before the grand challenge. And flying colors. So, let's see how it did in the grand challenge itself. Oh my, you see here, it has to navigate an underground environment completely autonomously. This really is the ultimate test. No help is allowed, it has to figure out everything by itself. And this test has uneven terrain with lots of rubble, difficult seeing conditions, dust, tunnels, caves, you name it, an absolute nightmare scenario. And this is incredible. During this challenge, it explored more than a mile, and how many times did it fall? Well, what do you think? Zero. Yes, zero times. Wow, absolutely amazing. While we look at how well it can navigate the super slippery and squishy terrains, let's ask the key question. What is all this good for? Well, chiefly for exploring under-explored and dangerous areas. This means tons of useful applications. For instance, a variant of this could help save humans stuck under rubble, or perhaps even explore other planets without putting humans at risk. And more. And what do you think? What would you use this for? Please let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. 
I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open-source projects. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
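To illustrate the proprioception-plus-exteroception fusion described in the transcript above, here is a minimal sketch. All input dimensions and layer sizes are illustrative assumptions, and the paper's actual controller uses a recurrent belief-state encoder rather than this simple concatenation; the point is only to show both sensor streams feeding one policy, so the controller can lean on body sensing when the cameras fail.

```python
import torch
import torch.nn as nn

class FusedLocomotionPolicy(nn.Module):
    """Toy policy that fuses body sensing with a terrain height map."""
    def __init__(self, proprio_dim=33, height_samples=52, action_dim=12):
        super().__init__()
        # Exteroception: terrain height samples estimated from depth cameras.
        self.extero_encoder = nn.Sequential(
            nn.Linear(height_samples, 64), nn.ELU(), nn.Linear(64, 32))
        # The policy head consumes body state plus the encoded terrain view.
        self.policy = nn.Sequential(
            nn.Linear(proprio_dim + 32, 128), nn.ELU(),
            nn.Linear(128, action_dim))  # e.g. target joint positions

    def forward(self, proprio, height_map):
        terrain = self.extero_encoder(height_map)
        return self.policy(torch.cat([proprio, terrain], dim=-1))

policy = FusedLocomotionPolicy()
action = policy(torch.randn(1, 33), torch.randn(1, 52))
print(action.shape)  # torch.Size([1, 12])
```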
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jona Ifehir."}, {"start": 4.64, "end": 12.24, "text": " Today, we are going to see how a robot doggy can help us explore dangerous areas without"}, {"start": 12.24, "end": 14.24, "text": " putting humans at risk."}, {"start": 15.44, "end": 20.240000000000002, "text": " Well, not this one, mind you, because this is not performing too well."}, {"start": 20.24, "end": 29.36, "text": " Why is that? Well, this is a proprioceptive robot. This means that the robot is essentially blind."}, {"start": 30.08, "end": 36.72, "text": " Yes, really. It only senses its own internal state and that's it. For instance,"}, {"start": 36.72, "end": 43.04, "text": " it knows about its orientation and twist of the base unit, a little joint information"}, {"start": 43.04, "end": 49.76, "text": " like positions and velocities, and that's about it. It is still remarkable that it can navigate"}, {"start": 49.76, "end": 57.519999999999996, "text": " at all. However, this often means that the robot has to feel out the terrain before being able to"}, {"start": 57.519999999999996, "end": 63.519999999999996, "text": " navigate on it. Why would we do that? Well, if we have difficult seeing conditions,"}, {"start": 63.519999999999996, "end": 71.68, "text": " for instance dust, fog, or smoke, a robot that does not rely on seeing is suddenly super useful."}, {"start": 71.68, "end": 79.28, "text": " However, when we have good seeing conditions, as you see, we have to be able to do better than this."}, {"start": 79.92, "end": 88.08000000000001, "text": " So, can we? Well, have a look at this new technique. Hmm, now we're talking. This new robot"}, {"start": 88.08000000000001, "end": 96.08000000000001, "text": " is extra-oceptive, which means that it has cameras, it can see, but it also has proprioceptive"}, {"start": 96.08, "end": 102.4, "text": " sensors too. These are the ones that tell the orientation and similar information."}, {"start": 103.03999999999999, "end": 110.24, "text": " But this new one fuses together proprioception and extra-oception. What does that mean?"}, {"start": 110.88, "end": 120.16, "text": " Well, it can see and it can feel too. Kind of. The best of both worlds. Here you see how it sees"}, {"start": 120.16, "end": 127.36, "text": " the world. Thus, it does not need to feel out the terrain before the navigation, but, and here's the"}, {"start": 127.36, "end": 134.56, "text": " key. Now, hold on to your papers, because it can even navigate reasonably well, even if its"}, {"start": 134.56, "end": 147.51999999999998, "text": " sensors get covered. Look. That is absolutely amazing. And with the covers removed, we can give it"}, {"start": 147.52, "end": 156.56, "text": " another try, let's see the difference, and oh yeah, back in the game, baby. So far, great news,"}, {"start": 156.56, "end": 163.44, "text": " but what are the limits of this machine? Can it climb stairs? Well, let's have a look."}, {"start": 166.4, "end": 173.52, "text": " Yep, another problem. Not only that, but it can even go on a long hike in Switzerland and"}, {"start": 173.52, "end": 180.56, "text": " reach the summit without any issues, and in general, it can navigate a variety of really difficult"}, {"start": 180.56, "end": 190.16000000000003, "text": " terrains. 
So, if it can do all that, now let's be really tough on it and put it to the real test."}, {"start": 192.4, "end": 196.08, "text": " This is the testing footage before the grand challenge."}, {"start": 196.08, "end": 208.16000000000003, "text": " And flying colors. So, let's see how it did in the grand challenge itself."}, {"start": 209.52, "end": 216.8, "text": " Oh my, you see here, it has to navigate an underground environment completely, autonomously."}, {"start": 217.44, "end": 225.04000000000002, "text": " This really is the ultimate test. No help is allowed, it has to figure out everything by itself."}, {"start": 225.04, "end": 232.88, "text": " And this test has uneven terrain with lots of rubble, difficult seeing conditions, dust,"}, {"start": 232.88, "end": 240.56, "text": " tunnels, caves, you name it, an absolute nightmare scenario. And this is incredible."}, {"start": 240.56, "end": 246.56, "text": " During this challenge, it has explored more than a mile, and how many times has it fallen?"}, {"start": 246.56, "end": 255.76, "text": " Well, what do you think? Zero. Yes, zero times. Wow, absolutely amazing."}, {"start": 255.76, "end": 261.76, "text": " While we look at how well it can navigate the super slippery and squishy terrains,"}, {"start": 261.76, "end": 268.96, "text": " let's ask the key question. What is all this good for? Well, chiefly for exploring,"}, {"start": 268.96, "end": 276.47999999999996, "text": " under-explored, and dangerous areas. This means tons of useful applications, for instance,"}, {"start": 276.47999999999996, "end": 284.96, "text": " a variant of this could help save humans stuck under rubble, or perhaps even explore other planets"}, {"start": 284.96, "end": 292.88, "text": " without putting humans at risk. And more. And what do you think? What would you use this for?"}, {"start": 292.88, "end": 299.12, "text": " Please let me know in the comments below. What you see here is a report of this exact paper we"}, {"start": 299.12, "end": 304.32, "text": " have talked about, which was made by Wates and Biasis. I put a link to it in the description."}, {"start": 304.32, "end": 308.4, "text": " Make sure to have a look. I think it helps you understand this paper better."}, {"start": 309.04, "end": 314.32, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 314.32, "end": 319.84, "text": " Using their system, you can create beautiful reports like this one to explain your findings"}, {"start": 319.84, "end": 325.44, "text": " to your colleagues better. It is used by many prestigious labs, including OpenAI,"}, {"start": 325.44, "end": 331.84, "text": " Toyota Research, GitHub, and more. And the best part is that Wates and Biasis is free"}, {"start": 331.84, "end": 338.0, "text": " for all individuals, academics, and open source projects. Make sure to visit them through"}, {"start": 338.0, "end": 345.59999999999997, "text": " wnb.com slash papers, or just click the link in the video description, and you can get a free demo"}, {"start": 345.6, "end": 351.28000000000003, "text": " today. Our thanks to Wates and Biasis for their long standing support, and for helping us"}, {"start": 351.28000000000003, "end": 356.56, "text": " make better videos for you. Thanks for watching and for your generous support, and I'll see you"}, {"start": 356.56, "end": 385.52, "text": " next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=x_cxDgR1x-c
DeepMind's New AI: As Smart As An Engineer... Kind Of! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Competition-Level Code Generation with AlphaCode" is available here: https://alphacode.deepmind.com/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deepmind #alphacode
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You will not believe the things you will see in this episode. I promise. Earlier we saw that OpenAI's GPT-3 and Codex AI techniques can be used to solve grade school level math brain teasers, then they were improved to be able to solve university level math questions. Note that this technique required additional help to do that. And a follow-up work could even take a crack at roughly a third of mathematical Olympiad level math problems. And now let's see what the excellent scientists at DeepMind have been up to in the meantime. Check this out. This is AlphaCode. What is that? Well, in college, computer scientists learn how to program those computers. Now DeepMind decided to instead teach the computers how to program themselves. Wow, now here you see an absolute miracle in the making. Look, here is the description of the problem and here is the solution. Well, I hear you saying, Károly, there is no solution here, and you are indeed right, just give it a second. Yes, now hold on to your papers and marvel at how this AI is coding up the solution right now in front of our eyes. But that's nothing, we can also ask what the neural network is looking at. Check this out, it is peeping at different important parts of the problem statement and proceeds to write the solution taking these into consideration. You see that it also looks at different parts of the previous code that it had written to make sure that the new additions are consistent with those. Wow, that is absolutely amazing. I almost feel like I am watching a human solve this problem. Well, it solved this problem correctly. Unbelievable. So, how good is it? Well, it can solve about 34% of the problems in this dataset. What does that mean? Is that good or bad? Now, if you have been holding on to your papers so far, squeeze that paper, because it means that it roughly matches the expertise level of the average human competitor. Let's stop there for a moment and think about that. An AI that understands an English description mixed in with mathematical notation and codes up the solution as well as the average human competitor, at least on the tasks given in this dataset. Phew, wow. So what is this wizardry, and what is the key? Let's pop the hood and have a look together. Oh yes. One of the keys here is that it generates a ton of candidate programs and is able to filter them down to just a few promising solutions. And it can do this quickly and accurately. This is huge. Why? Because this means that the AI is able to have a quick look at a computer program and tell with a pretty high accuracy whether it will solve the given task or not. It has an intuition of sorts, if you will. Now, interestingly, it also uses 41 billion parameters. 41 billion is tiny compared to OpenAI's GPT-3, which has 175 billion. This means that currently AlphaCode uses a more compact neural network, and it is possible that the number of parameters can be increased here to further improve the results. If we look at DeepMind's track record improving on these ideas, I have no doubt that however amazing these results seem now, we have really seen nothing yet. And wait, there is more. This is where I completely lost my papers. In the case that you see here, it even learned to invent algorithms. A simple one, mind you, this is DFS, a search algorithm that is taught in first-year undergrad computer science courses, but that does not matter. What matters is that this is an AI that can finally invent new things. Wow. 
A Google engineer who is also a world-class competitive programmer was asked to have a look at these solutions, and he was quite happy with the results. He said of one of the solutions that, quote, it looks like it could easily be written by a human. Very impressive. Now, clearly it is not perfect, some criticisms were also voiced. For instance, sometimes it forgets about variables that remain unused. Even that is very human-like, I must say. But do not think of this paper as the end of something. No, no, this is but a stepping stone towards something much greater. And I will be honest, I can't even imagine what we will have just a couple more papers down the line. What a time to be alive. So, what would you use this for? What do you think will happen a couple of papers down the line? I'd love to know. Let me know in the comments below. And if you are excited to hear about potential follow-up papers to this, make sure to subscribe and hit the bell icon so you definitely do not miss it when it comes. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure, because they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
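To make the sample-and-filter idea above tangible, here is a minimal sketch in Python. This is only an illustration, not DeepMind's actual code: the toy task, the candidate strings, and the `solve` convention are all invented. The real system samples vastly more programs from a large transformer, but the filtering principle, keeping only the candidates that reproduce the example input/output pairs from the problem statement, is the same.

```python
# Hypothetical sketch of AlphaCode-style sample-and-filter. In the real
# system, a large transformer would emit thousands of candidate programs;
# here we hard-code three stand-ins for a toy "sum the list" task.
CANDIDATES = [
    "def solve(xs): return max(xs)",      # plausible but wrong
    "def solve(xs): return sum(xs)",      # correct
    "def solve(xs): return sum(xs) + 1",  # off by one
]

# Example tests as they might appear in the problem statement (invented).
EXAMPLE_TESTS = [([1, 2, 3], 6), ([5], 5)]

def passes_examples(src: str) -> bool:
    """Run one candidate in a scratch namespace against the example I/O."""
    namespace: dict = {}
    try:
        exec(src, namespace)  # compile and load the candidate program
        return all(namespace["solve"](x) == y for x, y in EXAMPLE_TESTS)
    except Exception:
        return False          # crashing candidates are filtered out too

promising = [c for c in CANDIDATES if passes_examples(c)]
print(promising)  # -> ['def solve(xs): return sum(xs)']
```

In this toy setting the filter is exact, while in practice it only needs to be accurate enough to discard most bad candidates cheaply, which is the "intuition of sorts" mentioned above.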
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jona Ifehir."}, {"start": 4.88, "end": 8.92, "text": " You will not believe the things you will see in this episode."}, {"start": 8.92, "end": 10.24, "text": " I promise."}, {"start": 10.24, "end": 18.36, "text": " Earlier we saw that OpenEIS GPT-3 and Codex AI techniques can be used to solve grade school"}, {"start": 18.36, "end": 25.88, "text": " level math brain teasers, then they were improved to be able to solve university level math"}, {"start": 25.88, "end": 27.36, "text": " questions."}, {"start": 27.36, "end": 32.0, "text": " Note that this technique required additional help to do that."}, {"start": 32.0, "end": 37.879999999999995, "text": " And a follow-up work could even take a crack at roughly a third of mathematical,"}, {"start": 37.879999999999995, "end": 41.12, "text": " Olympiad level math problems."}, {"start": 41.12, "end": 47.64, "text": " And now let's see what the excellent scientists at deep-mind have been up to in the meantime."}, {"start": 47.64, "end": 49.64, "text": " Check this out."}, {"start": 49.64, "end": 51.6, "text": " This is Alpha Code."}, {"start": 51.6, "end": 52.6, "text": " What is that?"}, {"start": 52.6, "end": 58.92, "text": " Well, in college, computer scientists learn how to program those computers."}, {"start": 58.92, "end": 65.4, "text": " Now deep-mind decided to instead teach the computers how to program themselves."}, {"start": 65.4, "end": 71.2, "text": " Wow, now here you see an absolute miracle in the making."}, {"start": 71.2, "end": 76.84, "text": " Look, here is the description of the problem and here is the solution."}, {"start": 76.84, "end": 84.16, "text": " Well, I hear you saying, Karoy, there is no solution here and you are indeed right, just"}, {"start": 84.16, "end": 85.68, "text": " give it a second."}, {"start": 85.68, "end": 94.24000000000001, "text": " Yes, now hold on to your papers and marvel at how this AI is coding up the solution right"}, {"start": 94.24000000000001, "end": 96.88, "text": " now in front of our eyes."}, {"start": 96.88, "end": 102.2, "text": " But that's nothing we can also ask what the neural network is looking at."}, {"start": 102.2, "end": 107.48, "text": " Check this out, it is peeping at different important parts of the problem statement and"}, {"start": 107.48, "end": 112.56, "text": " proceeds to write the solution taking these into consideration."}, {"start": 112.56, "end": 117.64, "text": " You see that it also looks at different parts of the previous code that it had written"}, {"start": 117.64, "end": 122.08, "text": " to make sure that the new additions are consistent with those."}, {"start": 122.08, "end": 125.84, "text": " Wow, that is absolutely amazing."}, {"start": 125.84, "end": 130.36, "text": " I almost feel like watching a human solve this problem."}, {"start": 130.36, "end": 134.52, "text": " Well, it solved this problem correctly."}, {"start": 134.52, "end": 135.52, "text": " Unbelievable."}, {"start": 135.52, "end": 138.72000000000003, "text": " So, how good is it?"}, {"start": 138.72000000000003, "end": 144.76000000000002, "text": " Well, it can solve about 34% of the problems in this dataset."}, {"start": 144.76000000000002, "end": 145.76000000000002, "text": " What does that mean?"}, {"start": 145.76000000000002, "end": 148.32000000000002, "text": " Is that good or bad?"}, {"start": 148.32000000000002, "end": 153.84, "text": " Now if we have been holding on to your paper so 
far, squeeze that paper because it means"}, {"start": 153.84, "end": 160.28000000000003, "text": " that it roughly matches the expertise level of the average human competitor."}, {"start": 160.28, "end": 163.4, "text": " Let's stop there for a moment and think about that."}, {"start": 163.4, "end": 170.4, "text": " An AI that understands an English description mixed in with mathematical notation and codes"}, {"start": 170.4, "end": 177.28, "text": " up the solution as well as the average human competitor at least on the tasks given in"}, {"start": 177.28, "end": 178.28, "text": " this dataset."}, {"start": 178.28, "end": 181.04, "text": " Phew, wow."}, {"start": 181.04, "end": 185.2, "text": " So what is this wizardry and what is the key?"}, {"start": 185.2, "end": 188.32, "text": " Let's pop the hood and have a look together."}, {"start": 188.32, "end": 190.0, "text": " Oh yes."}, {"start": 190.0, "end": 195.8, "text": " One of the keys here is that it generates a ton of candidate programs and is able to"}, {"start": 195.8, "end": 200.56, "text": " filter them down to just a few promising solutions."}, {"start": 200.56, "end": 204.88, "text": " And it can do this quickly and accurately."}, {"start": 204.88, "end": 206.28, "text": " This is huge."}, {"start": 206.28, "end": 207.68, "text": " Why?"}, {"start": 207.68, "end": 213.88, "text": " Because this means that the AI is able to have a quick look at a computer program and tell"}, {"start": 213.88, "end": 219.8, "text": " with a pretty high accuracy whether this will solve the given task or not."}, {"start": 219.8, "end": 223.72, "text": " It has an intuition of sorts if you will."}, {"start": 223.72, "end": 228.8, "text": " Now interestingly it also uses 41 billion parameters."}, {"start": 228.8, "end": 236.64000000000001, "text": " 41 billion is tiny compared to OpenEAS GPT3 which has 175 billion."}, {"start": 236.64000000000001, "end": 243.48000000000002, "text": " This means that currently AlphaCode uses a more compact neural network and it is possible"}, {"start": 243.48000000000002, "end": 249.28, "text": " that the number of parameters can be increased here to further improve the results."}, {"start": 249.28, "end": 255.56, "text": " If we look at DeepMind's track record improving on these ideas I have no doubt that however"}, {"start": 255.56, "end": 260.76, "text": " amazing these results seem now who have really seen nothing yet."}, {"start": 260.76, "end": 263.4, "text": " And wait, there is more."}, {"start": 263.4, "end": 266.24, "text": " This is where I completely lost my papers."}, {"start": 266.24, "end": 271.28, "text": " In the case that you see here it even learned to invent algorithms."}, {"start": 271.28, "end": 278.72, "text": " A simple one mind you, this is DFS, a search algorithm that is taught in first year undergrad"}, {"start": 278.72, "end": 282.88000000000005, "text": " computer science courses but that does not matter."}, {"start": 282.88000000000005, "end": 288.88000000000005, "text": " What matters is that this is an AI that can finally invent new things."}, {"start": 288.88000000000005, "end": 289.88000000000005, "text": " Wow."}, {"start": 289.88000000000005, "end": 297.12, "text": " A Google engineer with also a world class competitive programmer was also asked to have a look at"}, {"start": 297.12, "end": 301.16, "text": " these solutions and he was quite happy with the results."}, {"start": 301.16, "end": 306.76000000000005, "text": " He said to one of the solutions that quote, it looks like it 
could easily be written by"}, {"start": 306.76, "end": 309.68, "text": " a human very impressive."}, {"start": 309.68, "end": 314.59999999999997, "text": " Now clearly it is not perfect, some criticisms were also voiced."}, {"start": 314.59999999999997, "end": 320.52, "text": " For instance, sometimes it forgets about variables that remain unused."}, {"start": 320.52, "end": 324.0, "text": " Even that is very human like I must say."}, {"start": 324.0, "end": 328.32, "text": " But do not think of this paper as the end of something."}, {"start": 328.32, "end": 334.15999999999997, "text": " No no, this is but a stepping stone towards something much greater."}, {"start": 334.16, "end": 340.12, "text": " And I will be honest, I can't even imagine what we will have just a couple more papers"}, {"start": 340.12, "end": 341.6, "text": " down the line."}, {"start": 341.6, "end": 343.08000000000004, "text": " What a time to be alive."}, {"start": 343.08000000000004, "end": 346.20000000000005, "text": " So, what would you use this for?"}, {"start": 346.20000000000005, "end": 349.40000000000003, "text": " What do you think will happen a couple papers down the line?"}, {"start": 349.40000000000003, "end": 350.8, "text": " I'd love to know."}, {"start": 350.8, "end": 353.12, "text": " Let me know in the comments below."}, {"start": 353.12, "end": 359.20000000000005, "text": " And if you are excited to hear about potential follow up papers to this, make sure to subscribe"}, {"start": 359.20000000000005, "end": 361.52000000000004, "text": " and hit the bell icon."}, {"start": 361.52, "end": 364.52, "text": " So definitely not want to miss it when it comes."}, {"start": 364.52, "end": 367.96, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 367.96, "end": 373.91999999999996, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 373.91999999999996, "end": 380.91999999999996, "text": " They recently launched Quadro RTX 6000, RTX 8000 and V100 instances."}, {"start": 380.91999999999996, "end": 387.52, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 387.52, "end": 388.52, "text": " Azure."}, {"start": 388.52, "end": 393.84, "text": " Because they are the only Cloud service with 48GB RTX 8000."}, {"start": 393.84, "end": 400.24, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 400.24, "end": 402.03999999999996, "text": " workstations or servers."}, {"start": 402.03999999999996, "end": 407.4, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU"}, {"start": 407.4, "end": 408.76, "text": " instances today."}, {"start": 408.76, "end": 413.47999999999996, "text": " Our thanks to Lambda for their longstanding support and for helping us make better videos"}, {"start": 413.47999999999996, "end": 414.47999999999996, "text": " for you."}, {"start": 414.48, "end": 418.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KeepnvtICWo
NVIDIA's Magical AI Speaks Using Your Voice! 🙊
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" is available here: https://research.nvidia.com/publication/2017-07_Audio-Driven-Facial-Animation Details about #Audio2Face are available here: https://www.nvidia.com/en-us/omniverse/apps/audio2face/ https://docs.omniverse.nvidia.com/app_audio2face/app_audio2face/overview.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/vectors/triangles-polygon-color-pink-1430105/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, through the power of AI research, we are going to see how easily we can ask virtual characters to not only say what we say, but we can even become an art director and ask them to add some emotion to it. You and your men will do that. You have to go in and out very quick. Are those Eurasian footwear cowboy chaps or jolly earth-moving headgear? Probably the most important thing we need to do is to bring the country together and one of the skills that I bring to bear. So, how does this happen? Well, in goes what we say, for instance, me uttering "Dear Fellow Scholars", or anything else. And here's the key. We can also specify the emotional state of the character, and the AI does the rest. That is absolutely amazing, but it gets even more amazing. Now, hold onto your papers and look. Yes, that's right, this was possible back in 2017, approximately 400 Two Minute Papers episodes ago. And whenever I showcase results like this, I always get the question from you Fellow Scholars asking, yes, this all looks great, but when do I get to use this? And the answer is, right now. Why? Because NVIDIA has released Audio2Face, a collection of AI techniques that we can use to perform this quickly and easily. Look, we can record our voice live and have a virtual character say what we are saying. But it doesn't stop there, it also has three amazing features. One, we can even perform a face swap, not only between humanoid characters, but my goodness, even from a humanoid to, for instance, a rhino. Now, that's something. I love it, but wait, there is more. There is this. And this too. Two, we can still specify emotions like anger, sadness, and excitement, and the virtual character will perform that for us. We only need to provide our voice, no more acting skills required. In my opinion, this will be a godsend in any kind of digital media, computer games, or even when meeting our friends in a virtual space. Three, the usability of this technique is out of this world. For instance, it does not eat a great deal of resources, so we can easily run multiple instances of it at the same time. This is a wonderful usability feature, one of many that really makes or breaks whether a new technique gets used in the industry or not. An aspect not to be underestimated. And here is another usability feature. It works well with Unreal Engine's MetaHuman. This is a piece of software that can create virtual humans. And with that, we can not only create these virtual humans, but become the voice actors for them without having to hire a bunch of animators. How cool is that? Now, I believe this is an earlier version of MetaHuman. Here is the newer one. Wow, way better. Just imagine how cool it would be to voice these characters automatically. Now, the important lesson is that this was possible in a paper in 2017, and now, in a few years, it has vastly improved, so much so that it is now out there, deployed in a real product that we can use right now. That is a powerful democratizing force for computer animation. So, yes, the papers that you see here are real, as real as it gets, and this tech transfer can often occur in just a few years' time, in some other cases, even quicker. What a time to be alive. So, what would you use this for? I'd love to know what you think, so please let me know in the comments below. This episode has been supported by Cohere AI. 
Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers, or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
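The core interface described above, voice in, emotion label in, facial animation out, can be sketched in a few lines. This is a hypothetical illustration of the general idea behind the 2017 paper, not NVIDIA's actual Audio2Face code: the feature sizes, the emotion set, and the blendshape output are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a window of audio features plus an emotion code goes
# in, per-frame facial rig parameters (blendshape weights) come out.
N_AUDIO = 128       # audio features for one window (made-up size)
N_EMOTION = 4       # one-hot over {neutral, anger, sadness, excitement}
N_BLENDSHAPES = 52  # a common facial rig size (assumption)

model = nn.Sequential(
    nn.Linear(N_AUDIO + N_EMOTION, 256),
    nn.ReLU(),
    nn.Linear(256, N_BLENDSHAPES),
)

audio = torch.randn(1, N_AUDIO)                 # stand-in for real features
emotion = torch.tensor([[0.0, 0.0, 0.0, 1.0]])  # "excitement"
weights = model(torch.cat([audio, emotion], dim=1))
print(weights.shape)  # torch.Size([1, 52]) -> drives the virtual face
```

Because the emotion enters as an explicit input, the same recorded voice can be replayed with different emotional performances, which is exactly the art-director workflow shown in the video.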
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 11.84, "text": " Today, through the power of AI research, we are going to see how easily we can ask virtual"}, {"start": 11.84, "end": 19.28, "text": " characters to not only say what we say, but we can even become an art director and ask them"}, {"start": 19.28, "end": 21.28, "text": " to add some emotion to it."}, {"start": 21.92, "end": 26.88, "text": " You and your men will do that. You have to go in and out very quick. Are those"}, {"start": 26.88, "end": 30.64, "text": " Eurasian footwear cowboy chaps or jolly earth-moving headgear?"}, {"start": 32.24, "end": 35.6, "text": " Probably the most important thing we need to do is to bring the country together"}, {"start": 36.56, "end": 39.84, "text": " and one of the skills that I bring to bear."}, {"start": 39.84, "end": 45.2, "text": " So, how does this happen? Well, in-goes, what we say, for instance,"}, {"start": 45.2, "end": 52.8, "text": " me, uttering, dear Fellow Scholars, or anything else. And here's the key. We can also specify"}, {"start": 52.8, "end": 57.599999999999994, "text": " the emotional state of the character and the AI does the rest."}, {"start": 58.48, "end": 66.96, "text": " That is absolutely amazing, but it gets even more amazing. Now, hold onto your papers and look."}, {"start": 67.75999999999999, "end": 75.75999999999999, "text": " Yes, that's right, this was possible back in 2017, approximately 400 too-minute papers"}, {"start": 75.75999999999999, "end": 82.32, "text": " episodes ago. And whenever I showcase results like this, I always get the question from you Fellow"}, {"start": 82.32, "end": 89.75999999999999, "text": " Scholars asking, yes, this all looks great, but when do I get to use this? And the answer is,"}, {"start": 90.39999999999999, "end": 98.39999999999999, "text": " right now. Why? Because Nvidia has released audio to face a collection of AI techniques"}, {"start": 98.39999999999999, "end": 105.19999999999999, "text": " that we can use to perform this quickly and easily. Look, we can record our voice,"}, {"start": 105.2, "end": 112.96000000000001, "text": " live, and have a virtual character say what we are seeing. But it doesn't stop there, it also has"}, {"start": 112.96000000000001, "end": 121.2, "text": " three amazing features. One, we can even perform a face swap not only between humanoid characters,"}, {"start": 121.2, "end": 130.96, "text": " but my goodness, even from a humanoid to, for instance, a rhino. Now, that's something. I love it,"}, {"start": 130.96, "end": 138.24, "text": " but wait, there is more. There is this. And this too."}, {"start": 140.08, "end": 147.44, "text": " Two, we can still specify emotions like anger, sadness, and excitement, and the virtual"}, {"start": 147.44, "end": 154.64000000000001, "text": " character will perform that for us. We only need to provide our voice no more acting skills required."}, {"start": 154.64, "end": 162.23999999999998, "text": " In my opinion, this will be a gutsynt in any kind of digital media, computer games, or even when"}, {"start": 162.23999999999998, "end": 169.92, "text": " meeting our friends in a virtual space. 
Three, the usability of this technique is out of this world."}, {"start": 170.32, "end": 176.79999999999998, "text": " For instance, it does not eat a great deal of resources, so we can easily run multiple instances"}, {"start": 176.79999999999998, "end": 184.23999999999998, "text": " of it at the same time. This is a wonderful usability feature. One of many that really makes or"}, {"start": 184.24, "end": 192.08, "text": " breaks when it comes to a new technique being used in the industry or not. An aspect not to be underestimated."}, {"start": 192.08, "end": 200.0, "text": " And here is another usability feature. It works well with Unreal Engine's Meta human. This is a"}, {"start": 200.0, "end": 206.4, "text": " piece of software that can create virtual humans. And with that, we can not only create these virtual"}, {"start": 206.4, "end": 215.12, "text": " humans, but become the voice actors for them without having to hire a bunch of animators. How cool"}, {"start": 215.12, "end": 221.68, "text": " is that? Now, I believe this is an earlier version of Meta human. Here is the newer one."}, {"start": 222.64000000000001, "end": 229.76, "text": " Wow, way better. Just imagine how cool it would be to voice these characters automatically."}, {"start": 229.76, "end": 238.23999999999998, "text": " Now, the important lesson is that this was possible in a paper in 2017, and now, in a few years,"}, {"start": 238.23999999999998, "end": 245.2, "text": " it has vastly improved so much so that it is now out there deployed in a real product that we"}, {"start": 245.2, "end": 253.35999999999999, "text": " can use right now. That is a powerful democratizing force for computer animation. So, yes,"}, {"start": 253.36, "end": 260.8, "text": " the papers that you see here are real, as real as it gets, and this tech transfer can often occur"}, {"start": 260.8, "end": 268.32, "text": " in just a few years' time, in some other cases, even quicker. What a time to be alive. So,"}, {"start": 268.32, "end": 273.76, "text": " what would you use this for? I'd love to know what you think, so please let me know in the"}, {"start": 273.76, "end": 281.04, "text": " comments below. This episode has been supported by CoHear AI. CoHear builds large language models"}, {"start": 281.04, "end": 286.8, "text": " and makes them available through an API so businesses can add advanced language understanding"}, {"start": 286.8, "end": 294.32000000000005, "text": " to their system or app quickly with just one line of code. You can use your own data, whether it's"}, {"start": 294.32000000000005, "end": 301.04, "text": " text from customer service requests, legal contracts, or social media posts to create your own"}, {"start": 301.04, "end": 308.88, "text": " custom models to understand text, or even generate it. For instance, it can be used to automatically"}, {"start": 308.88, "end": 316.32, "text": " determine whether your messages are about your business hours, returns, or shipping, or it can"}, {"start": 316.32, "end": 322.8, "text": " be used to generate a list of possible sentences you can use for your product descriptions. Make sure"}, {"start": 322.8, "end": 329.92, "text": " to go to CoHear.ai slash papers, or click the link in the video description and give it a try today."}, {"start": 329.92, "end": 341.28000000000003, "text": " It's super easy to use. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=xpqMmmUkc_0
This New AI Can Find Your Dog In A Video! 🐩
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "MTTR - End-to-End Referring Video Object Segmentation with Multimodal Transformers" is available here: https://arxiv.org/abs/2111.14821 https://github.com/mttr2021/MTTR https://huggingface.co/spaces/MTTR/MTTR-Referring-Video-Object-Segmentation https://colab.research.google.com/drive/12p0jpSx3pJNfZk-y_L44yeHZlhsKVra-?usp=sharing 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-5953883/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to perform pose estimation with an amazing twist. You'll love it. But wait, what is pose estimation? Well, simple, a video of people goes in and the posture they are taking comes out. Now, you see here that previous techniques can already do this quite well, even for videos. So, by today, the game has evolved. Just pose estimation is not that new. We need pose estimation plus something else. We need a little extra, if you will. So what can that little extra be? Let's look at three examples. For instance, one: NVIDIA has a highly advanced pose estimation technique that can refine its estimations by putting these humans into a virtual physics simulation. Without that, this kind of foot sliding often happens, but after the physics simulation, not anymore. As a result, it can understand even this explosive sprinting motion. This dynamic serve too, you name it, all of them are very close. So, what is this good for? Well, many things, but here is my favorite. If we can track the motion well, we can put it onto a virtual character so we ourselves can move around in a beautiful, imagined world. So, that was one. Pose estimation plus something extra, where the something extra is a physics simulation. Nice. Now, two, if we allow an AI to read the Wi-Fi signals bouncing around in a room, it can perform pose estimation even through walls. Kind of. Once again, pose estimation with something extra. And three, this is pose estimation with inertial sensors. This works when playing a friendly game of table tennis with a friend, or, wait, maybe a not-so-friendly game of table tennis. And this works really well, even in the dark. So all of these are pose estimation plus something extra. And now, let's have a look at this new paper, which performs pose estimation plus, well, pose estimation, as it seems. Okay, I don't see anything new here, really. So why is this work on Two Minute Papers? Well, now, hold on to your papers and check this out. Yes. Oh, yes. So what is this? Well, here's the twist: we can give a piece of video to this AI, write a piece of text as you see up here, and it will not only find what we are looking for in the video, mark it, but then even track it over time. Now that is really cool. We just say what we want to be tracked over the video, and it will do it automatically. It can find the dog and the capybara. These are rather rudimentary descriptions, but it is by no means limited to that. Look, we can also say a man wearing a white shirt and blue shorts riding a surfboard. And yes, it can find it. And also, we can add a description of the surfboard, and it can tell which is which. And I like the tracking too. The scene has tons of high-frequency changes, lots of occlusion, and it is still doing really well. Loving it. So I am thinking that this helps us take full advantage of the text descriptions. Look, we can ask it to mark the parrot and the cockatoo, and it knows which is which. So I can imagine more advanced applications where we need to find the appropriate kind of animal or object among many others, and we don't even know what to look for. Just say what you want, and it will find it. I also liked how this is done with a transformer neural network that can jointly process the text and video in one elegant solution. That is really cool. Now of course, every single one of you Fellow Scholars can see that this is not perfect, not even close. Depending on the input, temporal coherence issues may arise. 
These are the jumpy artifacts from frame to frame. But still, this is swift progress in machine learning research. We could only do this in 2018, and we were very happy about it. And just a couple of papers down the line, we just say what we want, and the AI will do it. And just imagine what we will have a couple more papers down the line. I cannot wait. So what would you use this for? Please let me know in the comments below. And wait, the source code, an interactive demo, and a notebook are available in the video description. So, you know what to do? Yes, let the experiments begin. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
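The "one elegant solution" mentioned above, a transformer that jointly processes text and video, can be sketched generically. This is an illustration of the general multimodal-transformer recipe, not the actual MTTR code from the paper: the embedding width, token counts, and layer count are all invented.

```python
import torch
import torch.nn as nn

# Generic sketch of joint text-video processing (not the actual MTTR code).
D = 256                                    # shared embedding width (made up)
text_tokens = torch.randn(1, 12, D)        # e.g. "a man riding a surfboard"
video_tokens = torch.randn(1, 8 * 49, D)   # 8 frames x 49 patches (made up)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
    num_layers=4,
)

# Concatenate both modalities into one sequence so self-attention can relate
# words to image regions directly, then split the video part back out.
joint = encoder(torch.cat([text_tokens, video_tokens], dim=1))
video_features = joint[:, text_tokens.shape[1]:, :]
print(video_features.shape)  # torch.Size([1, 392, 256])
```

A mask head would then turn these per-patch features into the segmentation masks that track the described object over time.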
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 10.24, "text": " Today, we are going to perform pose estimation with an amazing twist."}, {"start": 10.24, "end": 11.52, "text": " You'll love it."}, {"start": 11.52, "end": 14.96, "text": " But wait, what is pose estimation?"}, {"start": 14.96, "end": 22.240000000000002, "text": " Well, simple, a video of people goes in and the posture they are taking comes out."}, {"start": 22.240000000000002, "end": 28.96, "text": " Now, you see here that previous techniques can already do this quite well even for videos."}, {"start": 28.96, "end": 32.76, "text": " So, by today, the game has evolved."}, {"start": 32.76, "end": 35.36, "text": " Just pose estimation is not that new."}, {"start": 35.36, "end": 39.52, "text": " We need pose estimation plus something else."}, {"start": 39.52, "end": 42.56, "text": " We need a little extra, if you will."}, {"start": 42.56, "end": 45.519999999999996, "text": " So what can that little extra be?"}, {"start": 45.519999999999996, "end": 47.6, "text": " Let's look at three examples."}, {"start": 47.6, "end": 54.7, "text": " For instance, one Nvidia has a highly advanced pose estimation technique that can refine its"}, {"start": 54.7, "end": 61.02, "text": " estimations by putting these humans into a virtual physics simulation."}, {"start": 61.02, "end": 67.62, "text": " Without that, this kind of foot sliding often happens, but after the physics simulation,"}, {"start": 67.62, "end": 69.02000000000001, "text": " not anymore."}, {"start": 69.02000000000001, "end": 74.66, "text": " As a result, it can understand even this explosive sprinting motion."}, {"start": 74.66, "end": 79.7, "text": " This dynamic serve too, you name it, all of them are very close."}, {"start": 79.7, "end": 86.3, "text": " So, what is this good for? 
Well, many things, but here is my favorite."}, {"start": 86.3, "end": 92.86, "text": " If we can track the motion well, we can put it onto a virtual character so we ourselves"}, {"start": 92.86, "end": 96.58, "text": " can move around in a beautiful, imagined world."}, {"start": 96.58, "end": 98.7, "text": " So, that was one."}, {"start": 98.7, "end": 105.74000000000001, "text": " Pose estimation plus something extra, where the something extra is a physics simulation."}, {"start": 105.74000000000001, "end": 106.74000000000001, "text": " Nice."}, {"start": 106.74, "end": 114.22, "text": " Now, two, if we allow an AI to read the Wi-Fi signals bouncing around in a room, it can"}, {"start": 114.22, "end": 119.3, "text": " perform pose estimation even through walls."}, {"start": 119.3, "end": 120.3, "text": " Kind of."}, {"start": 120.3, "end": 124.69999999999999, "text": " Once again, pose estimation with something extra."}, {"start": 124.69999999999999, "end": 130.34, "text": " And three, this is pose estimation with inertial sensors."}, {"start": 130.34, "end": 137.74, "text": " This works when playing a friendly game of table tennis with a friend, or, wait, maybe"}, {"start": 137.74, "end": 140.98000000000002, "text": " a not so friendly game of table tennis."}, {"start": 140.98000000000002, "end": 144.78, "text": " And this works really well, even in the dark."}, {"start": 144.78, "end": 150.58, "text": " So all of these are pose estimation plus something extra."}, {"start": 150.58, "end": 157.86, "text": " And now, let's have a look at this new paper which performs pose estimation plus, well,"}, {"start": 157.86, "end": 160.3, "text": " pose estimation as it seems."}, {"start": 160.3, "end": 163.54000000000002, "text": " Okay, I don't see anything new here, really."}, {"start": 163.54000000000002, "end": 166.66000000000003, "text": " So why is this work on two-minute papers?"}, {"start": 166.66000000000003, "end": 171.62, "text": " Well, now, hold on to your papers and check this out."}, {"start": 171.62, "end": 172.62, "text": " Yes."}, {"start": 172.62, "end": 174.26000000000002, "text": " Oh, yes."}, {"start": 174.26000000000002, "end": 175.86, "text": " So what is this?"}, {"start": 175.86, "end": 182.54000000000002, "text": " Well, here's the twist we can give a piece of video to this AI, write a piece of text"}, {"start": 182.54000000000002, "end": 189.82000000000002, "text": " as you see up here, and it will not only find what we are looking for in the video, market,"}, {"start": 189.82, "end": 193.22, "text": " but then even track it over time."}, {"start": 193.22, "end": 195.06, "text": " Now that is really cool."}, {"start": 195.06, "end": 201.14, "text": " We just say what we want to be tracked over the video, and it will do it automatically."}, {"start": 201.14, "end": 204.78, "text": " It can find the dog and the capybara."}, {"start": 204.78, "end": 210.62, "text": " These are rather rudimentary descriptions, but it is by no means limited to that."}, {"start": 210.62, "end": 218.74, "text": " Look, we can also say a man wearing a white shirt and blue shorts riding a surfboard."}, {"start": 218.74, "end": 221.98000000000002, "text": " And yes, it can find it."}, {"start": 221.98000000000002, "end": 228.54000000000002, "text": " And also we can add a description of the surfboard and it can tell which is which."}, {"start": 228.54000000000002, "end": 230.5, "text": " And I like the tracking too."}, {"start": 230.5, "end": 236.82000000000002, "text": " The scene has tons of high 
frequency changes, lots of occlusion, and it is still doing"}, {"start": 236.82000000000002, "end": 238.22, "text": " really well."}, {"start": 238.22, "end": 239.22, "text": " Loving it."}, {"start": 239.22, "end": 245.02, "text": " So I am thinking that this helps us take full advantage of the text descriptions."}, {"start": 245.02, "end": 252.78, "text": " Look, we can ask it to mark the parrot and the cacotoo, and it knows which is which."}, {"start": 252.78, "end": 258.46000000000004, "text": " So I can imagine more advanced applications where we need to find the appropriate kind"}, {"start": 258.46000000000004, "end": 265.22, "text": " of animal or object among many others, and we don't even know what to look for."}, {"start": 265.22, "end": 268.18, "text": " Just say what you want, and it will find it."}, {"start": 268.18, "end": 273.7, "text": " I also liked how this is done with a transformer neural network that can jointly process the"}, {"start": 273.7, "end": 277.86, "text": " text and video in one elegant solution."}, {"start": 277.86, "end": 279.62, "text": " That is really cool."}, {"start": 279.62, "end": 286.06, "text": " Now of course, every single one of you fellow scholars can see that this is not perfect,"}, {"start": 286.06, "end": 287.46, "text": " not even close."}, {"start": 287.46, "end": 291.62, "text": " Depending on the input, tempero coherence issues may arise."}, {"start": 291.62, "end": 294.78, "text": " These are the jumpy artifacts from frame to frame."}, {"start": 294.78, "end": 299.5, "text": " But still, this is swift progress in machine learning research."}, {"start": 299.5, "end": 305.14, "text": " We could only do this in 2018, and we were very happy about it."}, {"start": 305.14, "end": 310.26, "text": " And just a couple of papers down the line, we just say what we want, and the AI will do"}, {"start": 310.26, "end": 311.26, "text": " it."}, {"start": 311.26, "end": 315.9, "text": " And just imagine what we will have a couple more papers down the line."}, {"start": 315.9, "end": 317.98, "text": " I cannot wait."}, {"start": 317.98, "end": 320.66, "text": " So what would you use this for?"}, {"start": 320.66, "end": 322.9, "text": " Please let me know in the comments below."}, {"start": 322.9, "end": 329.06, "text": " And wait, the source code, an interactive demo, and a notebook are available in the"}, {"start": 329.06, "end": 330.06, "text": " video description."}, {"start": 330.06, "end": 332.42, "text": " So, you know what to do?"}, {"start": 332.42, "end": 335.54, "text": " Yes, let the experiments begin."}, {"start": 335.54, "end": 338.7, "text": " This video has been supported by weights and biases."}, {"start": 338.7, "end": 343.82, "text": " Check out the recent offering fully connected, a place where they bring machine learning"}, {"start": 343.82, "end": 350.42, "text": " practitioners together to share and discuss their ideas, learn from industry leaders,"}, {"start": 350.42, "end": 353.38, "text": " and even collaborate on projects together."}, {"start": 353.38, "end": 358.26, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by"}, {"start": 358.26, "end": 362.82, "text": " the series, but don't really know where to start."}, {"start": 362.82, "end": 364.34, "text": " And here it is."}, {"start": 364.34, "end": 370.02, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 370.02, "end": 373.9, "text": " get your papers accepted 
to a conference, and more."}, {"start": 373.9, "end": 380.26, "text": " Make sure to visit them through wnb.me slash papers, or just click the link in the video"}, {"start": 380.26, "end": 381.26, "text": " description."}, {"start": 381.26, "end": 386.46, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better"}, {"start": 386.46, "end": 387.46, "text": " videos for you."}, {"start": 387.46, "end": 391.46, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=v9aOsn8a-z4
Is Simulating A Jelly Sandwich Possible? 🦑
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Accelerated complex-step finite difference for expedient deformable simulation" is available here: http://www.cad.zju.edu.cn/home/weiweixu/wwxu2019.files/acf.pdf https://dl.acm.org/doi/10.1145/3355089.3356493 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to torment a virtual armadillo, become happy, or sad, depending on which way we are bending, create a ton of jelly sandwiches, design a crazy bridge, and more. So, what is going on here? Well, this new paper helps us enhance our physics simulation programs by improving how we evaluate derivatives. Derivatives describe how things change. Now, of course, evaluating derivatives is not new. It's not even old. It is ancient. Here you see an ancient technique for this, and it works well most of the time, but... Whoa! Well, this simulation blew up in our face, so, yes, it may work well most of the time, but, unfortunately, not here. Now, if you're wondering what should be happening in this scene, here is the reference simulation that showcases what should have happened. Yes, this is the famous pastime of the computer graphics researcher, tormenting virtual objects basically all day long. So, I hope you're enjoying this reference simulation, because this is a great reference simulation. It is as reference as a simulation can get. Now, I hope you know what's coming. Hold on to your papers, because this is not the reference simulation. What you see here is the new technique described in this paper. Absolutely amazing. Actually, let's compare the two. This is the reference simulation, for real this time. And this is the new complex-step finite difference method. The two are so close, they are essentially the same. I love it. So good. Now, if this comparison made you hungry, of course, we can proceed to the jelly sandwiches. Here is the same scene simulated with a bunch of previous techniques. And, my goodness, all of them look different. So, which of these jelly sandwiches is better? Well, the new technique is better, because this is the only one that preserves volume properly. This is the one that gets us the most jelly. With each of the other methods, the jelly either reacts incorrectly, or at the end of the simulation, there is less jelly than the amount we started with. Now, you're probably asking, is this really possible? And the answer is yes, yes it is. What's more, this is not only possible, but this is a widespread problem in physics simulation. Our seasoned Fellow Scholars have seen this problem in many previous episodes. For instance, here is one with the tragedy of the disappearing bunnies. And preserving the volume of the simulated materials is not only useful for jelly sandwiches. It is also useful for doing extreme yoga. Look, here are a bunch of previous techniques trying to simulate this. And, what do we see? Extreme bending, extreme bending, and even more extreme bending. Good, I guess. Well, not quite. This yoga shouldn't be nearly as extreme as we see here. The new technique reveals that this kind of bending shouldn't happen given these material properties. And wait, here comes one of my favorites. The new technique can also deal with this crazy example. Look, a nice little virtual hyperelastic material where the bending energy changes depending on the bending orientation, revealing a secret. Or two secrets, as you see, it does not like bending to the right so much, but bending to the left, now we're talking. And it can also help us perform inverse design problems. For instance, here we have a hyperelastic bridge built from over 20,000 parts. And here we can design what vibration frequencies should be present when the wind blows at our bridge. 
And here comes the coolest part: we can choose this in advance. And then the new technique quickly finds the suitable geometry that will showcase the prescribed vibration types. And it pretty much converges after 4 to 6 Newton iterations. What does that mean? Yes, it means that the technique comes up with an initial guess, and it needs to refine it only 4 to 6 times until it comes up with an excellent solution. So, better hyperelastic simulations, quickly and conveniently? Yes, please, sign me up right now. And what would you use this for? Let me know in the comments below. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models, with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
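The name "complex-step finite difference" refers to a classic trick that the paper accelerates for deformable simulation: for a real-analytic function f, the derivative can be read off the imaginary part, f'(x) ≈ Im(f(x + ih)) / h, with no subtraction of nearly equal numbers, so h can be made tiny without the roundoff blow-ups that plague ordinary finite differences. Here is a minimal, self-contained demonstration with a toy function of my own choosing, not anything from the paper:

```python
import cmath

def f(x):
    # Toy function for illustration; works for real and complex inputs.
    return cmath.exp(x) / cmath.sqrt(x)

def complex_step(f, x, h=1e-20):
    """Derivative via f'(x) = Im(f(x + ih)) / h; no cancellation error."""
    return f(complex(x, h)).imag / h

def forward_diff(f, x, h=1e-8):
    """Classic finite difference; limited by subtractive cancellation."""
    return ((f(x + h) - f(x)) / h).real

x = 1.5
exact = (cmath.exp(x) * (x - 0.5) / x**1.5).real  # hand-derived f'(x), ~2.4395
print(exact)
print(complex_step(f, x))  # matches to essentially full double precision
print(forward_diff(f, x))  # only about 8 correct digits
```

The Newton iterations mentioned above consume exactly such derivatives: each of the 4 to 6 refinement steps uses them to update the current guess for the geometry.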
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Efei."}, {"start": 4.64, "end": 9.08, "text": " Today, we are going to torment a virtual armadillo,"}, {"start": 9.08, "end": 14.16, "text": " become happy, or sad, depending on which way we are bending,"}, {"start": 14.16, "end": 20.56, "text": " create a ton of jelly sandwiches, design a crazy bridge, and more."}, {"start": 20.56, "end": 23.0, "text": " So, what is going on here?"}, {"start": 23.0, "end": 28.16, "text": " Well, this new paper helps us enhance our physics simulation programs"}, {"start": 28.16, "end": 31.68, "text": " by improving how we evaluate derivatives."}, {"start": 31.68, "end": 35.12, "text": " Derivatives describe how things change."}, {"start": 35.12, "end": 38.8, "text": " Now, of course, evaluating derivatives is not new."}, {"start": 38.8, "end": 40.64, "text": " It's not even old."}, {"start": 40.64, "end": 42.24, "text": " It is ancient."}, {"start": 42.24, "end": 44.879999999999995, "text": " Here you see an ancient technique for this,"}, {"start": 44.879999999999995, "end": 48.32, "text": " and it works well most of the time, but..."}, {"start": 48.32, "end": 49.44, "text": " Whoa!"}, {"start": 49.44, "end": 56.480000000000004, "text": " Well, this simulation blew up in our face, so, yes, it may work well most of the time,"}, {"start": 56.48, "end": 59.279999999999994, "text": " but, unfortunately, not here."}, {"start": 59.279999999999994, "end": 63.12, "text": " Now, if you're wondering what should be happening in this scene,"}, {"start": 63.12, "end": 68.0, "text": " here is the reference simulation that showcases what should have happened."}, {"start": 68.0, "end": 72.56, "text": " Yes, this is the famous pastime of the computer graphics researcher"}, {"start": 72.56, "end": 76.96, "text": " tormenting virtual objects basically all day long."}, {"start": 76.96, "end": 80.56, "text": " So, I hope you're enjoying this reference simulation,"}, {"start": 80.56, "end": 83.67999999999999, "text": " because this is a great reference simulation."}, {"start": 83.68, "end": 87.76, "text": " It is as reference as a simulation can get."}, {"start": 87.76, "end": 90.0, "text": " Now, I hope you know what's coming."}, {"start": 90.0, "end": 94.32000000000001, "text": " Hold on to your papers, because this is not the reference simulation."}, {"start": 94.32000000000001, "end": 98.80000000000001, "text": " What you see here is the new technique described in this paper."}, {"start": 98.80000000000001, "end": 100.64000000000001, "text": " Absolutely amazing."}, {"start": 100.64000000000001, "end": 103.12, "text": " Actually, let's compare the two."}, {"start": 103.12, "end": 107.60000000000001, "text": " This is the reference simulation for real this time."}, {"start": 107.60000000000001, "end": 111.68, "text": " And this is the new complex step finite difference method."}, {"start": 111.68, "end": 116.32000000000001, "text": " The two are so close, they are essentially the same."}, {"start": 116.32000000000001, "end": 118.48, "text": " I love it. 
So good."}, {"start": 118.48, "end": 124.80000000000001, "text": " Now, if this comparison made you hungry, of course, we can proceed to the jelly sandwiches."}, {"start": 124.80000000000001, "end": 129.84, "text": " Here is the same scene simulated with a bunch of previous techniques."}, {"start": 129.84, "end": 134.48000000000002, "text": " And, my goodness, all of them look different."}, {"start": 134.48000000000002, "end": 138.24, "text": " So, which of these jelly sandwiches is better?"}, {"start": 138.24, "end": 144.64000000000001, "text": " Well, the new technique is better, because this is the only one that preserves volume properly."}, {"start": 144.64000000000001, "end": 147.44, "text": " This is the one that gets us the most jelly."}, {"start": 147.44, "end": 152.08, "text": " With each of the other methods, the jelly either reacts incorrectly"}, {"start": 152.08, "end": 157.68, "text": " or at the end of the simulation, there is less jelly than the amount we started with."}, {"start": 158.48000000000002, "end": 162.24, "text": " Now, you're probably asking, is this really possible?"}, {"start": 162.24, "end": 165.12, "text": " And the answer is yes, yes it is."}, {"start": 165.12, "end": 171.28, "text": " What's more, this is not only possible, but this is a widespread problem in physics simulation."}, {"start": 171.28, "end": 176.16, "text": " Our seasoned fellow scholars had seen this problem in many previous episodes."}, {"start": 176.16, "end": 181.28, "text": " For instance, here is one with the tragedy of the disappearing bunnies."}, {"start": 181.28, "end": 187.76, "text": " And preserving the volume of the simulated materials is not only useful for jelly sandwiches."}, {"start": 187.76, "end": 191.04000000000002, "text": " It is also useful for doing extreme yoga."}, {"start": 191.04, "end": 196.32, "text": " Look, here are a bunch of previous techniques trying to simulate this."}, {"start": 196.32, "end": 197.92, "text": " And, what do we see?"}, {"start": 197.92, "end": 203.76, "text": " Extreme bending, extreme bending, and even more extreme bending."}, {"start": 203.76, "end": 205.28, "text": " Good, I guess."}, {"start": 205.28, "end": 207.2, "text": " Well, not quite."}, {"start": 207.2, "end": 211.6, "text": " This yoga shouldn't be nearly as extreme as we see here."}, {"start": 211.6, "end": 215.84, "text": " The new technique reveals that this kind of bending shouldn't happen"}, {"start": 215.84, "end": 218.16, "text": " given these material properties."}, {"start": 218.16, "end": 221.76, "text": " And wait, here comes one of my favorites."}, {"start": 221.76, "end": 225.28, "text": " The new technique can also deal with this crazy example."}, {"start": 225.28, "end": 230.16, "text": " Look, a nice little virtual hyperelastic material"}, {"start": 230.16, "end": 237.2, "text": " where the bending energy changes depending on the bending orientation, revealing a secret."}, {"start": 237.2, "end": 242.72, "text": " Or two secrets, as you see, it does not like bending to the right so much,"}, {"start": 242.72, "end": 246.16, "text": " but bending to the left, now we're talking."}, {"start": 246.16, "end": 250.79999999999998, "text": " And it can also help us perform inverse design problems."}, {"start": 250.79999999999998, "end": 257.68, "text": " For instance, here we have a hyperelastic bridge built from over 20,000 parts."}, {"start": 257.68, "end": 264.56, "text": " And here we can design what vibration frequencies should be present when the wind blows at our 
bridge."}, {"start": 264.56, "end": 269.44, "text": " And here comes the coolest part we can choose this in advance."}, {"start": 269.44, "end": 274.15999999999997, "text": " And then the new technique quickly finds the suitable geometry"}, {"start": 274.16, "end": 277.20000000000005, "text": " that will showcase the prescribed vibration types."}, {"start": 277.20000000000005, "end": 283.04, "text": " And it pretty much converges after 4 to 6 Newton iterations."}, {"start": 283.04, "end": 284.56, "text": " What does that mean?"}, {"start": 284.56, "end": 288.72, "text": " Yes, it means that the technique comes up with an initial guess"}, {"start": 288.72, "end": 293.12, "text": " and it needs to refine it only 4 to 6 times"}, {"start": 293.12, "end": 296.56, "text": " until it comes up with an excellent solution."}, {"start": 296.56, "end": 302.16, "text": " So, better hyperelastic simulations quickly and conveniently?"}, {"start": 302.16, "end": 305.20000000000005, "text": " Yes, please, sign me up right now."}, {"start": 305.20000000000005, "end": 307.52000000000004, "text": " And what would you use this for?"}, {"start": 307.52000000000004, "end": 309.36, "text": " Let me know in the comments below."}, {"start": 309.36, "end": 312.56, "text": " Percepti Labs is a visual API for TensorFlow"}, {"start": 312.56, "end": 317.44000000000005, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 317.44000000000005, "end": 320.40000000000003, "text": " This gives you a faster way to build out models"}, {"start": 320.40000000000003, "end": 324.16, "text": " with more transparency into how your model is architected,"}, {"start": 324.16, "end": 327.36, "text": " how it performs, and how to debug it."}, {"start": 327.36, "end": 331.76000000000005, "text": " And it even generates visualizations for all the model variables"}, {"start": 331.76, "end": 335.28, "text": " and gives you recommendations both during modeling"}, {"start": 335.28, "end": 339.12, "text": " and training and does all this automatically."}, {"start": 339.12, "end": 343.44, "text": " I only wish I had a tool like this when I was working on my neural networks"}, {"start": 343.44, "end": 345.28, "text": " during my PhD years."}, {"start": 345.28, "end": 348.48, "text": " Visit perceptilabs.com slash papers"}, {"start": 348.48, "end": 351.52, "text": " and start using their system for free today."}, {"start": 351.52, "end": 354.15999999999997, "text": " Our thanks to perceptilabs for their support"}, {"start": 354.15999999999997, "end": 356.88, "text": " and for helping us make better videos for you."}, {"start": 356.88, "end": 359.12, "text": " Thanks for watching and for your generous support"}, {"start": 359.12, "end": 362.16, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9JZdAq8poww
Is OpenAI’s AI As Smart As A University Student? 🤖
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The three papers are available here: Grade school math: https://openai.com/blog/grade-school-math/ University level math: https://arxiv.org/abs/2112.15594 Olympiad: https://openai.com/blog/formal-math/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photos/laptop-computer-green-screen-3781381/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a little taste of how smart an AI can be these days. And it turns out these new AIs are not only smart enough to solve some grade school math problems, but get this, a new development can perhaps even take a crack at university-level problems. Is that even possible, or is this science fiction? Well, the answer is yes, it is possible, kind of. So why kind of? Well, let me try to explain. This is OpenAI's work from October 2021. The goal is to have their AI understand these questions, understand the mathematics, and reason about a possible solution for grade school problems. Hmm, all right. So this means that the GPT-3 AI might be suitable as the substrate of the solution. What is that? GPT-3 is a technique that can understand text, try to finish your sentences, even build websites, and more. So can it even deal with these test questions? Let's see it together. Hold on to your papers, because in goes a grade school level question, a little math brain teaser if you will, and out comes... my goodness, is that right? Here, out comes not only the correct solution to the question, but even the thought process that led to this solution. Imagine someone claiming that they had developed an AI this capable 10 years ago. This person would have been locked into an asylum. And now it is all there, right in front of our eyes. Absolutely amazing. Okay, but how amazing? Well, it can't get everything right all the time, not even close. If we do everything right, we can expect it to be correct about 35% of the time. Not perfect, not even close, but it is an amazing step forward. So what is the key here? Well, yes, you guessed right. The usual suspects: a big neural network and lots of training data. The key numbers are 175 billion model parameters, and it needs to read a few thousand problems and their solutions as training samples. That is a big rocket, and lots of rocket fuel, if you will. But this is nothing compared to what is to come. Now, believe it or not, here is a follow-up paper from just a few months later, January 2022, that claims to do something even better. This is not from OpenAI, but it piggybacks on OpenAI technology, as you will see in a moment. And this work promises that it can solve university-level problems. And when I saw this in the paper, I thought, really? Now, grade school materials, okay, that is a great leap forward, but solving university-level math exams, that's where the gloves come off. I am really curious to see what this can do. Let's have a look together. Some of these brain teasers smell very much like MIT to me, surprisingly short and elegant questions that often seem much easier than they are. However, all of these require a solid understanding of fundamentals and sometimes even a hint of creativity. Let's see... yes, that is indeed right, these are MIT introductory course questions. I love it. So can it answer them? Now, if you have been holding on to your paper so far, squeeze that paper, and let's see the results together. My goodness, these are all correct. Flying colors, perfect accuracy, at least on these questions. This is swift progress in just a few months. Absolutely amazing. So how is this black magic done? Yes, I know that's what you're waiting for, so let's pop the hood and look inside together. Mm-hmm. All right, two key differences from OpenAI's GPT-3-based solution. Difference number one: it gets additional guidance.
For instance, it is told what topic we are talking about, what code library to reach out for, and what the definitions of relevant mathematical concepts are, for instance, what a singular value decomposition is. I would argue that this is not too bad. Students typically get taught these things before the exam, too. In my opinion, the key is that this additional guidance is done in an automated manner. The more automated, the better. Difference number two: the substrate here is not GPT-3, at least not directly, but Codex. Codex is OpenAI's GPT language model that was fine-tuned to be excellent at one thing, and that is writing computer programs or finishing your code. And as we've seen in a previous episode, it really is excellent. For instance, it can not only be asked to explain a piece of code, even if it is written in assembly, or create a Pong game in 30 seconds, but we can also give it a plain text description of a space game and it will write it. Codex is super powerful, and now it can be used to solve previously unseen university-level math problems. Now that is really something. And it can even generate a bunch of new questions, and these are bona fide real questions. Not just exchanging the numbers; the new questions often require completely different insights to solve. A little creativity, I see. Well done, little AI. So how good are these? Well, according to human evaluators, they are almost as good as the ones written by other humans. And thus, these can even be used to provide more and more training data for such an AI, more fuel for that rocket, and the good kind of fuel. Excellent. And it doesn't end there. In the meantime, as of February 2022, scientists at OpenAI are already working on a follow-up paper that solves no less than high school mathematical olympiad problems. These problems require a solid understanding of fundamentals, proper reasoning, and often even that is not enough. Many of these tasks put up a seemingly impenetrable wall, and climbing the wall typically requires a real creative spark. Yes, this means that this can get quite tough. And their new method is doing really well at these, once again, not perfect, not even close. But it can solve about 30 to 40% of these tasks, and that is a remarkable hit rate. Now we see that all of these works are amazing, and they all have their own trade-offs. They are good and bad at different things, and have different requirements. And most of all, they all have their own limitations. Thus, none of these works should be thought of as an AI that just automatically does human-level math. What we see now is that there is swift progress in this area, and amazing new papers are popping up, not every year, but pretty much every month. And this is an excellent place to apply the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, what would you use this for? Please let me know in the comments below. I'd love to hear your ideas. And also, if you are excited by this kind of incredible progress in AI research, make sure to subscribe and hit the bell icon to not miss it when we cover these amazing new papers. This video has been supported by Weights & Biases. Look at this: they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is.
In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me/paperforum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
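The grade school experiment above boils down to an evaluation loop: show the model a word problem, let it write out its reasoning, pull the final number from the output, and compare against the ground truth. A hedged sketch of such a harness; model_solve is a hypothetical stand-in for whatever language model is queried, not OpenAI's actual evaluation code:

import re

def extract_final_number(text):
    # Grade school answers are single numbers; take the last number
    # appearing in the model's written-out reasoning.
    numbers = re.findall(r"-?\d+\.?\d*", text.replace(",", ""))
    return float(numbers[-1]) if numbers else None

def solve_rate(problems, model_solve):
    # problems: a list of (question, numeric_answer) pairs.
    correct = 0
    for question, answer in problems:
        prediction = extract_final_number(model_solve(question))
        if prediction is not None and abs(prediction - answer) < 1e-6:
            correct += 1
    return correct / len(problems)

Measured this way over held-out problems, a solve rate around 0.35 is the figure quoted above.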
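The "additional guidance" of the university-level follow-up can be pictured as prompt assembly: the topic, a target library, and concept definitions are prepended to the question before a code model sees it, and the generated program is executed to produce the answer. The sketch below only illustrates that idea; query_code_model is hypothetical, and the prompt format is not the one from the paper:

def build_guided_prompt(topic, library, definitions, question):
    # Prepend the context a student would also be given before an exam.
    header = [f"Topic: {topic}", f"Use the Python library: {library}", "Definitions:"]
    header += [f"- {name}: {text}" for name, text in definitions.items()]
    return "\n".join(header) + f"\n\nWrite a program that solves:\n{question}\n"

prompt = build_guided_prompt(
    topic="Linear algebra",
    library="numpy",
    definitions={"singular value decomposition":
                 "a factorization A = U S V^T with orthogonal U and V"},
    question="Compute the singular values of [[1, 2], [3, 4]].",
)
# program_text = query_code_model(prompt)  # hypothetical model call
# exec(program_text)                       # run the generated solver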
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.8, "end": 11.44, "text": " Today we are going to have a little taste of how smart an AI can be these days."}, {"start": 11.44, "end": 18.96, "text": " And it turns out these new AI's are not only smart enough to solve some grade school math problems,"}, {"start": 18.96, "end": 26.240000000000002, "text": " but get this, a new development can perhaps even take a crack at university level problems."}, {"start": 26.24, "end": 30.159999999999997, "text": " Is that even possible or is this science fiction?"}, {"start": 30.159999999999997, "end": 34.16, "text": " Well, the answer is yes, it is possible, kind of."}, {"start": 34.96, "end": 44.16, "text": " So why kind of? Well, let me try to explain. This is opening as work from October 2021."}, {"start": 44.16, "end": 50.0, "text": " The goal is to have their AI understand these questions, understand the mathematics,"}, {"start": 50.0, "end": 57.44, "text": " and reason about a possible solution for grade school problems. Hmm, all right."}, {"start": 57.44, "end": 63.36, "text": " So this means that the GPT-3 AI might be suitable for the substrate of the solution."}, {"start": 64.0, "end": 72.4, "text": " What is that? GPT-3 is a technique that can understand text, try to finish your sentences,"}, {"start": 72.4, "end": 78.88, "text": " even build websites, and more. So can it even deal with these test questions?"}, {"start": 78.88, "end": 84.24, "text": " Let's see it together. Hold on to your papers because in goes a grade school level question,"}, {"start": 84.88, "end": 92.24, "text": " a little math brain teaser if you will, and outcomes. My goodness, is that right?"}, {"start": 92.96, "end": 99.6, "text": " Here, outcomes not only the correct solution to the question, but even the thought process that"}, {"start": 99.6, "end": 105.19999999999999, "text": " led to this solution. Imagine someone claiming that they had developed an AI"}, {"start": 105.2, "end": 110.64, "text": " discapable 10 years ago. This person would have been logged into an asylum."}, {"start": 111.36, "end": 116.96000000000001, "text": " And now it is all there, right in front of our eyes. Absolutely amazing."}, {"start": 117.52000000000001, "end": 124.72, "text": " Okay, but how amazing. Well, it can't get everything right all the time and not even close."}, {"start": 124.72, "end": 130.56, "text": " If we do everything right, we can expect it to be correct about 35% of the time."}, {"start": 130.56, "end": 136.0, "text": " Not perfect, not even close, but it is an amazing step forward."}, {"start": 136.8, "end": 144.16, "text": " So what is the key here? Well, yes, you guys did right. The usual suspects. A big neural network,"}, {"start": 144.16, "end": 152.8, "text": " and lots of training data. The key numbers are 175 billion model parameters, and it needs to read"}, {"start": 152.8, "end": 158.88, "text": " a few thousand problems and their solutions as training samples. That is a big rocket,"}, {"start": 158.88, "end": 165.28, "text": " and lots of rocket fuel, if you will. But this is nothing compared to what is to come."}, {"start": 166.0, "end": 171.84, "text": " Now, believe it or not, here is a follow-up paper from just a few months later,"}, {"start": 171.84, "end": 180.24, "text": " January 2022, that claims to do something even better. 
This is not from OpenAI, but it piggybacks"}, {"start": 180.24, "end": 187.04, "text": " on OpenAI technology, as you will see in a moment. And this work promises that it can solve"}, {"start": 187.04, "end": 193.84, "text": " university-level problems. And when I saw this in the paper, I thought, really, now great school"}, {"start": 193.84, "end": 201.28, "text": " materials, okay, that is a great leap forward, but solving university-level math exams, that's"}, {"start": 201.28, "end": 207.44, "text": " where the gloves come off. I am really curious to see what this can do. Let's have a look together."}, {"start": 209.44, "end": 216.0, "text": " Some of these brain teasers smell very much like MIT to me, surprisingly short and elegant"}, {"start": 216.0, "end": 222.56, "text": " questions that often seem much easier than they are. However, all of these require a solid"}, {"start": 222.56, "end": 231.28, "text": " understanding of fundamentals and sometimes even a hint of creativity. Let's see, yes, that is,"}, {"start": 231.28, "end": 238.72, "text": " indeed right, these are MIT introductory course questions. I love it. So can it answer them?"}, {"start": 239.36, "end": 245.76, "text": " Now, if we have been holding on to your paper so far, squeeze that paper, and let's see the results"}, {"start": 245.76, "end": 255.04, "text": " together. My goodness, these are all correct. Flying colors, perfect accuracy, at least on these"}, {"start": 255.04, "end": 263.36, "text": " questions. This is swift progress in just a few months. Absolutely amazing. So how is this black"}, {"start": 263.36, "end": 270.15999999999997, "text": " magic done? Yes, I know that's what you're waiting for, so let's pop the hood and look inside"}, {"start": 270.16, "end": 277.04, "text": " together. Mm-hmm. All right, two key differences from OpenAI's GPT-3-based solution."}, {"start": 277.68, "end": 284.64000000000004, "text": " Difference number one. It gets additional guidance. For instance, it is told what topic we are"}, {"start": 284.64000000000004, "end": 291.68, "text": " talking about, what code library to reach out for, and what is the definition of mathematical concepts,"}, {"start": 291.68, "end": 298.48, "text": " for instance, what is a singular value decomposition. I would argue that this is not too bad."}, {"start": 298.48, "end": 305.12, "text": " Students typically get taught these things before the exam, too. In my opinion, the key is that"}, {"start": 305.12, "end": 311.12, "text": " this additional guidance is done in an automated manner. The more automated, the better."}, {"start": 311.84000000000003, "end": 319.28000000000003, "text": " Difference number two. The substrate here is not GPT-3, at least not directly, but codex."}, {"start": 319.6, "end": 326.48, "text": " Codex is OpenAI's GPT language model that was fine-tuned to be excellent at one thing."}, {"start": 326.48, "end": 334.72, "text": " And that is writing computer programs or finishing your code. And as we've seen in a previous episode,"}, {"start": 334.72, "end": 341.36, "text": " it really is excellent. For instance, it can not only be asked to explain a piece of code,"}, {"start": 341.36, "end": 349.52000000000004, "text": " even if it is written in assembly, or create a pong game in 30 seconds, but we can also give it"}, {"start": 349.52, "end": 356.96, "text": " plain text descriptions about a space game and it will write it. 
Codex is super powerful,"}, {"start": 356.96, "end": 363.03999999999996, "text": " and now it can be used to solve previously unseen university level math problems."}, {"start": 363.68, "end": 369.68, "text": " Now that is really something. And it can even generate a bunch of new questions,"}, {"start": 369.68, "end": 376.08, "text": " and these are bona fide real questions. Not just exchanging the numbers, the new questions"}, {"start": 376.08, "end": 382.8, "text": " often require completely different insights to solve these problems. A little creativity I see"}, {"start": 383.59999999999997, "end": 390.47999999999996, "text": " well done little AI. So how good are these? Well, according to human evaluators,"}, {"start": 390.47999999999996, "end": 397.76, "text": " they are almost as good as the ones written by other humans. And thus, these can even be used"}, {"start": 397.76, "end": 404.47999999999996, "text": " to provide more and more training data for such an AI, more fuel for that rocket, and good"}, {"start": 404.48, "end": 411.84000000000003, "text": " kind of fuel. Excellent. And it doesn't end there. In the meantime, as of February 2022,"}, {"start": 411.84000000000003, "end": 419.68, "text": " scientists at OpenAI are already working on a follow-up paper that solves no less than high school"}, {"start": 419.68, "end": 426.32, "text": " mathematical-olimpiate problems. These problems require a solid understanding of fundamentals,"}, {"start": 426.32, "end": 433.52000000000004, "text": " proper reasoning, and often even that is not enough. Many of these tasks put up a seemingly"}, {"start": 433.52, "end": 439.52, "text": " impenetrable wall and climbing the wall typically requires a real creative spark."}, {"start": 439.52, "end": 446.56, "text": " Yes, this means that this can get quite tough. And their new method is doing really well at these,"}, {"start": 446.56, "end": 454.32, "text": " once again, not perfect, not even close. But it can solve about 30 to 40% of these tasks,"}, {"start": 454.32, "end": 460.4, "text": " and that is a remarkable hit rate. Now we see that all of these works are amazing,"}, {"start": 460.4, "end": 465.84, "text": " and they all have their own trade-offs. They are good and bad at different things,"}, {"start": 465.84, "end": 472.64, "text": " and have different requirements. And most of all, they all have their own limitations. Thus,"}, {"start": 472.64, "end": 478.71999999999997, "text": " none of these works should be thought of as an AI that just automatically does human level math."}, {"start": 478.71999999999997, "end": 485.28, "text": " What we see now is that there is swift progress in this area, and amazing new papers are popping up,"}, {"start": 485.28, "end": 492.88, "text": " not every year, but pretty much every month. And this is an excellent place to apply the first"}, {"start": 492.88, "end": 499.28, "text": " law of papers, which says that research is a process. Do not look at where we are, look at where we"}, {"start": 499.28, "end": 505.59999999999997, "text": " will be, two more papers down the line. So, what would you use this for? Please let me know in the"}, {"start": 505.59999999999997, "end": 511.35999999999996, "text": " comments below. I'd love to hear your ideas. 
And also, if you are excited by this kind of"}, {"start": 511.36, "end": 517.52, "text": " incredible progress in AR research, make sure to subscribe and hit the bell icon to not miss it"}, {"start": 517.52, "end": 524.0, "text": " when we cover these amazing new papers. This video has been supported by weights and biases."}, {"start": 524.0, "end": 530.08, "text": " Look at this, they have a great community forum that aims to make you the best machine learning"}, {"start": 530.08, "end": 536.32, "text": " engineer you can be. You see, I always get messages from you fellow scholars telling me that you"}, {"start": 536.32, "end": 544.0, "text": " have been inspired by the series, but don't really know where to start. And here it is. In this forum,"}, {"start": 544.0, "end": 550.72, "text": " you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit"}, {"start": 550.72, "end": 558.96, "text": " WNB.ME slash paper forum and say hi or just click the link in the video description."}, {"start": 558.96, "end": 564.48, "text": " Our thanks to weights and biases for their long standing support and for helping us make better"}, {"start": 564.48, "end": 572.08, "text": " videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QrsxIa0JDi4
These Smoke Simulations Have A Catch! 💨
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "Spiral-Spectral Fluid Simulation" is available here: http://www.tkim.graphics/SPIRAL/SpiralSpectralFluids.pdf ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. After reading a physics textbook on the laws of fluid motion, with a little effort, we can make a virtual world come alive by writing a computer program that contains these laws, resulting in beautiful fluid simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only because computer hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. And this new paper promises detailed spiral-spectral fluid and smoke simulations. What does that mean? It means that the simulations can be run inside a torus, a sphere, a cylinder, you name it. But wait, is that really new? When using traditional simulation techniques, we can just enclose the smoke in all kinds of domain shapes where the simulation will take place. People have done this for decades now. Here is an example of that. So what is new here? Well, let's have a look at some results and hopefully find out together. Let's start with the details first. This is the new technique. Hmm, I like this one. This is a detailed simulation. Sure, I'll give it that. But we can already create detailed simulations with traditional techniques. So once again, what is new here? You know what? Actually, let's compare it to a traditional smoke simulation technique, give it the same amount of time to run, and see what that looks like. Wow, that is a huge difference. And yes, believe it or not, the two simulations run in the same amount of time. So, yes, it creates detailed simulations. Checkmark. And it has not only the details, but other virtues too. Now, let's bring up the heat some more. This is a comparison not to an older classical technique, but to a spherical spectral technique from 2019. Let's see how the new method fares against it. Well, they both look good. So maybe this new method is not so much better after... wait a second. Ouch. The previous one blew up. And the new one? Yes, this still keeps going. Such improvement in just about two years. So, it not only fares well, but it is robust too. That is super important for real-world use. Details and robustness. Checkmark. Now, let's continue with the shape of the simulation domain. Yes, we can enclose the simulation within this domain, where the spherical domain itself can be imagined as an impenetrable wall, but it doesn't have to be that way. Look, we can even open it up. Very good. OK, so it is fast. It is robust. It supports crazy simulation domain shapes, and even better, it looks detailed. But are these the right details? Is this just pleasing to the eye, or is this really how smoke should behave? The authors tested that too, and now, hold on to your papers and look. I could add the labels here, but does it really matter? The two look almost exactly the same. Almost pixel perfect. By the way, here you go. So, wow, the list of positives just keeps on growing. But we are experienced Fellow Scholars here, so let's continue interrogating this method. Does it work for different viscosities? At the risk of simplifying what is going on, the viscosity of a puff of smoke relates to how nimble it is. And it can handle a variety of these physical parameters too. OK, next. Can it interact with other objects too? I ask because some techniques look great in a simple, empty simulation domain, but break down when placed into a real scene where a lot of other objects are moving around.
Well, not this new one. That is a beautiful simulation. I love it. So, I am getting more and more convinced with each test. So, where is the catch? What is the price to be paid for all this? Let's have a look. For the quality of the simulations we get, it runs in a few seconds per frame. And it doesn't even need your graphics card; it runs on your processor. And even then, this is blazing fast. And implementing this on the graphics card could very well put this into the real-time domain. And boy, getting these beautiful smoke puffs in real time would be an amazing treat. So, once again, what is the price to be paid for this? Well, have a look. Aha, there it is. That is a steep price. Look, this needs tons of memory. Tens of gigabytes. No wonder this was run on the processor; modern computer graphics cards don't have nearly as much memory on board. So, what do we do? Well, don't despair, not even for a second. We still have good news. And the good news is that there are earlier research works that explore compressing these data sets down, and it turns out their size can be decreased dramatically. A perfect direction for the next paper down the line. And what do you think? Let me know in the comments below what you would use this for. And just one or two more papers down the line, and maybe we will get these beautiful simulations in our virtual worlds in real time. I can't wait. I really cannot wait. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
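The steep memory price mentioned above is easy to get a feel for with back-of-the-envelope arithmetic: a spectral solver that tabulates many vector-valued basis functions over a 3D grid pays for every basis function at full grid resolution. The numbers below are made up for illustration and are not the paper's actual configuration:

def basis_memory_gb(num_basis, grid_res, components=3, bytes_per_value=8):
    # One double-precision value per basis function, per grid cell,
    # per velocity component.
    return num_basis * grid_res**3 * components * bytes_per_value / 1e9

print(f"{basis_memory_gb(1000, 128):.0f} GB")  # ~50 GB with these made-up numbers

This is also why the compression direction mentioned above looks promising: many of the stored values are near zero and can be truncated aggressively.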
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zorna Ifeher."}, {"start": 5.0, "end": 23.0, "text": " After reading a physics textbook on the laws of fluid motion, with a little effort, we can make a virtual world come alive by writing a computer program that contains these laws resulting in beautiful fluid simulations like the one you see here."}, {"start": 23.0, "end": 38.0, "text": " The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that computer hardware improves over time, but also the pace of progress in computer graphics research is truly remarkable."}, {"start": 38.0, "end": 46.0, "text": " And this new paper promises detailed spiral spectral, fluid, and smoke simulations."}, {"start": 46.0, "end": 56.0, "text": " What does that mean? It means that the simulations can be run inside a torus, spheres, cylinders, you name it."}, {"start": 56.0, "end": 67.0, "text": " But wait, is that really new when using traditional simulation techniques we can just enclose the smoke in all kinds of domain shapes where the simulation will take place?"}, {"start": 67.0, "end": 70.0, "text": " People have done this for decades now."}, {"start": 70.0, "end": 84.0, "text": " Here is an example of that. So what is new here? Well, let's have a look at some results and hopefully find out together. Let's start with the details first. This is the new technique."}, {"start": 84.0, "end": 97.0, "text": " Hmm, I like this one. This is a detailed simulation. Sure, I'll give it that. But we can already create detailed simulations with traditional techniques. So once again, what is new here?"}, {"start": 97.0, "end": 108.0, "text": " You know what? Actually, let's compare it to a traditional smoke simulation technique and give it the same amount of time to run and see what that looks like."}, {"start": 108.0, "end": 117.0, "text": " Wow, that is a huge difference. And yes, believe it or not, the two simulations run in the same amount of time."}, {"start": 117.0, "end": 128.0, "text": " So, yes, it creates detailed simulations. Checkmark. And it has not only the details, but it has other virtues too."}, {"start": 128.0, "end": 142.0, "text": " Now, let's bring up the heat some more. This is a comparison to not an older classical technique, but a spherical spectral technique from 2019. Let's see how the new method fares against it."}, {"start": 142.0, "end": 150.0, "text": " Well, they both look good. So maybe this new method is not so much better after..."}, {"start": 150.0, "end": 162.0, "text": " Wait a second. Ouch. The previous one blew up. And the new one, yes. This still keeps going. Such improvement in just about two years."}, {"start": 162.0, "end": 173.0, "text": " So, it is not only fares, but it is robust too. That is super important for real world use. Details and robustness. Checkmark."}, {"start": 173.0, "end": 191.0, "text": " Now, let's continue with the shape of the simulation domain. Yes, we can enclose the simulation within this domain where the spherical domain itself can be imagined as an impenetrable wall, but it doesn't have to be that way. Look, we can even open it up."}, {"start": 191.0, "end": 211.0, "text": " Very good. OK, so it is fast. It is robust. It supports crazy simulation domain shapes and even better, it looks detailed. But, are these the right details? 
Is this just pleasing for the eye or is this really how smoke should behave?"}, {"start": 211.0, "end": 228.0, "text": " The authors tested that tool and now hold on to your papers and look. I could add the labels here, but does it really matter? The tool look almost exactly the same. Almost pixel perfect. By the way, here you go."}, {"start": 228.0, "end": 242.0, "text": " So, wow, the list of positives just keeps on growing. But, we are experienced fellow scholars here, so let's continue interrogating this method. Does it work for different viscosities?"}, {"start": 242.0, "end": 260.0, "text": " At the risk of simplifying what is going on, the viscosity of a puff of smoke relates to how nimble the simulation is. And it can handle a variety of these physical parameters too. OK, next. Can it interact with other objects too?"}, {"start": 260.0, "end": 278.0, "text": " I ask because some techniques look great in a simple, empty simulation domain, but break down when placed into a real scene where a lot of other objects are moving around. Well, not this new one. That is a beautiful simulation. I love it."}, {"start": 278.0, "end": 302.0, "text": " So, I am getting more and more convinced with each test. So, where is the catch? What is the price to be paid for all this? Let's have a look. For the quality of the simulations we get, it runs in a few seconds per frame. And it doesn't even need your graphics card, it runs on your processor. And even then this is blazing fast."}, {"start": 302.0, "end": 314.0, "text": " And implementing this on the graphics card could very well put this into the real time domain. And boy, getting these beautiful smoke puffs in real time would be an amazing treat."}, {"start": 314.0, "end": 320.0, "text": " So, once again, what is the price to be paid for this? Well, have a look."}, {"start": 320.0, "end": 340.0, "text": " Aha, there it is. That is a steep price. Look, this needs tons of memory. Tens of gigabytes. No wonder this was run on the processor. This is because modern computer graphics card don't have nearly as much memory on board. So, what do we do?"}, {"start": 340.0, "end": 356.0, "text": " Well, don't despair not even for a second. We still have good news. And the good news is that there are earlier research works that explore compressing these data sets down. And it turns out their size can be decreased dramatically."}, {"start": 356.0, "end": 378.0, "text": " A perfect direction for the next paper down the line. And what do you think? Let me know in the comments below what you would use this for. And just one or two more papers down the line. And maybe we will get these beautiful simulations in our virtual worlds in real time. I can't wait. I really cannot wait."}, {"start": 378.0, "end": 392.0, "text": " What a time to be alive. Wets and biases provides tools to track your experiments in your deep learning projects using their system. You can create beautiful reports like this one to explain your findings to your colleagues better."}, {"start": 392.0, "end": 407.0, "text": " It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Wets and Biases is free for all individuals, academics and open source projects."}, {"start": 407.0, "end": 417.0, "text": " Make sure to visit them through WNB.com slash papers or just click the link in the video description. 
And you can get a free demo today."}, {"start": 417.0, "end": 437.0, "text": " Our thanks to Wets and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=M0RuBETA2f4
A Repulsion Simulation! But Why? 🐰
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The "Repulsive Surfaces" and "Repulsive Curves" papers are available here: https://www.cs.cmu.edu/~kmcrane/Projects/RepulsiveSurfaces/index.html http://www.cs.cmu.edu/~kmcrane/Projects/RepulsiveCurves/index.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to learn why you should not even try to handcuff a computer graphics researcher. And it's not because they are criminal masterminds. No, no. So, what is it then? Well, if they have read this paper, handcuffing them will not work. See, they will get out easily. So, what is all this insanity here? Well, this paper is about simulating repulsion. Or, to be more exact, computing repulsive forces on curves and surfaces. Now, of course, your first question is: okay, but what is that good for? Well, believe it or not, it doesn't sound like it, but it is good for so many things, the applications just keep coming and coming. So, how does this work? Well, repulsion can prevent intersections and collisions. And look at that. If we run this process over time, it creates flows. In other words, it starts mingling with the geometry in interesting and, it turns out, also useful ways. For instance, imagine that we have this tangled handcuff. And someone comes up to us and says they can untangle this object by molding it. And not only that, but this person gets even more brazen. They say that they can even do it gently. Or, in other words, without any intersections. No colliding or breaking is allowed. Well, then I say I don't believe a word of it, show me. And the person does exactly that. Using the repulsive force algorithm, we can find a way to untangle them. This seems like black magic. Also, with some adjustments to this process, and running it backwards in time, we can even start with a piece of geometry and compute an ideal shrink wrap for it. Even better, when applied to point clouds, we can even create a full 3D geometry that represents these points. And get this: it works even if the point clouds are incomplete. Now, make no mistake, the topic of point cloud reconstruction has been studied for decades now. So much so that during my PhD years, I attended well over 100 paper talks on this topic. And I am by no means an expert on this, not even close, but I can tell you, this looks like a pretty good solution. And these applications emerge only as a tiny side effect of this algorithm. But it's not over, there is more. It can even fix bad mesh geometries, something that artists encounter all the time. Loving it. And it can also create cool and crazy surfaces for your art installations. Now, so far we have applied repulsion to surfaces. But this concept can be applied to curves as well, which opens up a whole new world of really cool applications. Let's start with the most important one. Do you know what happens when you put your wired earbuds into your pocket? Yes, of course. Exactly, this happens every time, right? And now, hold on to your papers, because it turns out it can even untangle your headphones without breaking them. Now, wait a second, if it can generate curves that don't intersect, maybe it could also be used for path planning for characters in a virtual world. Look, these folks intersect. But if they use this new technique for path planning, there should be no intersections. And yes, no intersections, thereby no collisions. Excellent. And if we feel like it, we can also start with a small piece of noodle inside a bunny and start growing it. Why not? And then we can marvel at how, over time, it starts to look like intestines. And it just still keeps growing and growing without touching the bunny. So cool. Now, onto more useful things.
It can even be used to visualize social media connections and family trees in a compact manner. Or it can move muscle fibers around, and the list just goes on. But you are experienced Fellow Scholars over here. So I hear you asking, well, wait a second, all this sounds quite trivial. Just apply repulsive forces to the entirety of the surfaces, and off we go. Why does this have to be a paper published at a prestigious conference? Well, actually, let's try this. If we do that, uh-oh, this happens. Or this happens. The algorithm is not so trivial after all. And this is what I love about this paper. It proposes a simple idea. And this idea is simple, but not easy. If we think it is easy, this happens. But if we do it well, this happens. The paper describes in detail how to perform this so it works properly, and provides a ton of immediate things it can be used for. This is not a product. This is an idea and a list of potential applications for it. In my opinion, this is basic academic research at its best. Bravo. I love it. And what would you use this for? Please let me know in the comments below. I'd love to hear your ideas. And until then, we will have no more tangled earbuds in our pockets. And we can thank computer graphics research for that. What a time to be alive. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data. I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
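The "simple, but not easy" point is worth making concrete. Below is the trivial version the video warns about: naive inverse-square repulsion between every pair of points on a closed curve. This is emphatically not the paper's method, which builds on a tangent-point energy with a careful discretization; a raw step like this can stretch, oscillate, or blow up, which is exactly the failure shown above. A minimal numpy sketch:

import numpy as np

def naive_repulsion_step(points, step=1e-3, eps=1e-9):
    # Push every point away from every other point with a ~1/r^2 force.
    n = len(points)
    forces = np.zeros_like(points)
    for i in range(n):
        diff = points[i] - points                 # vectors from all others to point i
        dist2 = (diff ** 2).sum(axis=1) + eps
        dist2[i] = np.inf                         # no self-interaction
        forces[i] = (diff / dist2[:, None] ** 1.5).sum(axis=0)
    return points + step * forces

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta), 0.1 * np.sin(3 * theta)], axis=1)
for _ in range(100):
    curve = naive_repulsion_step(curve)           # the curve just inflates and drifts

Note the missing pieces: nothing preserves edge lengths, nothing stops the curve from simply ballooning outward, and the flow is badly conditioned; supplying those is the actual content of the paper.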
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.84, "text": " Today, we are going to learn why you should not even try to handcuff a computer graphics researcher."}, {"start": 11.84, "end": 15.280000000000001, "text": " And it's not because they are criminal masterminds."}, {"start": 15.280000000000001, "end": 16.240000000000002, "text": " No, no."}, {"start": 16.240000000000002, "end": 18.240000000000002, "text": " So, what is it then?"}, {"start": 18.240000000000002, "end": 23.28, "text": " Well, if they have read this paper, handcuffing them will not work."}, {"start": 23.28, "end": 26.240000000000002, "text": " See, they will get out easily."}, {"start": 26.240000000000002, "end": 29.52, "text": " So, what is all this insanity here?"}, {"start": 29.52, "end": 33.519999999999996, "text": " Well, this paper is about simulating repulsion."}, {"start": 33.519999999999996, "end": 40.08, "text": " Or to be more exact, computing repulsive forces on curves and surfaces."}, {"start": 40.08, "end": 45.84, "text": " Now, of course, your first question is okay, but what is that good for?"}, {"start": 45.84, "end": 52.239999999999995, "text": " Well, believe it or not, it doesn't sound like it, but it is good for so many things,"}, {"start": 52.239999999999995, "end": 55.6, "text": " the applications just keep coming and coming."}, {"start": 55.6, "end": 57.68, "text": " So, how does this work?"}, {"start": 57.68, "end": 63.36, "text": " Well, repulsion can prevent intersections and collisions."}, {"start": 63.36, "end": 65.36, "text": " And look at that."}, {"start": 65.36, "end": 69.76, "text": " If we run this process over time, it creates flows."}, {"start": 69.76, "end": 74.8, "text": " In other words, it starts mingling with the geometry in interesting,"}, {"start": 74.8, "end": 78.24, "text": " and it turns out also useful ways."}, {"start": 78.24, "end": 82.8, "text": " For instance, imagine that we have this tangled handcuff."}, {"start": 82.8, "end": 89.36, "text": " And someone comes up to us and says they can untangle this object by molding."}, {"start": 89.36, "end": 94.16, "text": " And not only that, but this person gets even more brazen."}, {"start": 94.16, "end": 97.2, "text": " They say that they even do it gently."}, {"start": 97.2, "end": 101.2, "text": " Or, in other words, without any intersections."}, {"start": 101.2, "end": 104.32, "text": " No colliding or breaking is allowed."}, {"start": 104.32, "end": 108.88, "text": " Well, then I say I don't believe a word of it, show me."}, {"start": 108.88, "end": 111.84, "text": " And the person does exactly that."}, {"start": 111.84, "end": 116.72, "text": " Using the repulsive force algorithm, we can find a way to untangle them."}, {"start": 116.72, "end": 119.12, "text": " This seems like black magic."}, {"start": 119.12, "end": 125.12, "text": " Also, with some adjustments to this process, and running it backwards in time,"}, {"start": 125.12, "end": 131.44, "text": " we can even start with a piece of geometry and compute an ideal shrink wrap for it."}, {"start": 131.44, "end": 137.76, "text": " Even better when applied to point clouds, we can even create a full 3D geometry"}, {"start": 137.76, "end": 140.24, "text": " that represents these points."}, {"start": 140.24, "end": 141.84, "text": " And get this."}, {"start": 141.84, "end": 146.56, "text": " It works even if the point clouds are incomplete."}, {"start": 146.56, "end": 
153.04000000000002, "text": " Now, make no mistake, the topic of point cloud reconstruction has been studied for decades now."}, {"start": 153.04000000000002, "end": 160.64000000000001, "text": " So much so that during my PhD years, I attended to well over 100 paper talks on this topic."}, {"start": 160.64000000000001, "end": 165.28, "text": " And I am by no means an expert on this, not even close,"}, {"start": 165.28, "end": 170.08, "text": " but I can tell you this looks like a pretty good solution."}, {"start": 170.08, "end": 176.56, "text": " And these applications emerge only as a tiny side effect of this algorithm."}, {"start": 176.56, "end": 179.68, "text": " But it's not over, there is more."}, {"start": 179.68, "end": 187.36, "text": " It can even fix bad mesh geometries, something that artists encounter in the world all the time."}, {"start": 187.36, "end": 188.64000000000001, "text": " Loving it."}, {"start": 188.64000000000001, "end": 194.96, "text": " And it can also create cool and crazy surfaces for your art installations."}, {"start": 194.96, "end": 199.68, "text": " Now, so far we have applied repulsion to surfaces."}, {"start": 199.68, "end": 204.0, "text": " But this concept can also be applied to curves as well,"}, {"start": 204.0, "end": 208.24, "text": " which opens up a whole new world of really cool applications."}, {"start": 208.24, "end": 210.64000000000001, "text": " Let's start with the most important one."}, {"start": 210.64000000000001, "end": 214.8, "text": " Do you know what happens when you put your wired earbuds into your pocket?"}, {"start": 215.76000000000002, "end": 217.52, "text": " Yes, of course."}, {"start": 217.52, "end": 221.04000000000002, "text": " Exactly, this happens every time, right?"}, {"start": 221.68, "end": 227.84, "text": " And now, hold on to your papers, because it turns out it can even untangle your headphones"}, {"start": 227.84, "end": 230.16, "text": " without breaking them."}, {"start": 230.16, "end": 235.84, "text": " Now, wait a second, if it can generate curves that don't intersect,"}, {"start": 235.84, "end": 242.16, "text": " maybe it could also be used for path planning for characters in a virtual world."}, {"start": 242.16, "end": 244.48000000000002, "text": " Look, these folks intersect."}, {"start": 245.2, "end": 248.8, "text": " But if they use this new technique for path planning,"}, {"start": 248.8, "end": 250.88, "text": " there should be no intersections."}, {"start": 251.6, "end": 256.16, "text": " And yes, no intersections, thereby no collisions."}, {"start": 256.16, "end": 257.28000000000003, "text": " Excellent."}, {"start": 258.08000000000004, "end": 264.08000000000004, "text": " And if we feel like it, we can also start with a small piece of noodle inside a bunny"}, {"start": 264.08000000000004, "end": 265.52000000000004, "text": " and start growing it."}, {"start": 266.24, "end": 266.72, "text": " Why not?"}, {"start": 267.28000000000003, "end": 273.04, "text": " And then we can marvel at how over time it starts to look like in testings."}, {"start": 273.04, "end": 277.68, "text": " And it just still keeps growing and growing without touching the bunny."}, {"start": 278.40000000000003, "end": 279.12, "text": " So cool."}, {"start": 280.64000000000004, "end": 282.88, "text": " Now, onto more useful things."}, {"start": 282.88, "end": 290.56, "text": " It can even be used to visualize social media connections and family trees in a compact manner."}, {"start": 290.88, "end": 295.52, "text": " Or it can move 
muscle fibers around and the list just goes on."}, {"start": 296.32, "end": 299.44, "text": " But you are experienced fellow scholars over here."}, {"start": 299.44, "end": 305.28, "text": " So I hear you asking, well, wait a second, all these sounds quite trivial."}, {"start": 305.84, "end": 310.08, "text": " Just apply repulsive forces to the entirety of the surfaces."}, {"start": 310.08, "end": 315.12, "text": " And off we go. Why does this have to be a paper published at a prestigious conference?"}, {"start": 315.68, "end": 317.84, "text": " Well, actually, let's try this."}, {"start": 319.03999999999996, "end": 322.15999999999997, "text": " If we do that, oh oh, this happens."}, {"start": 323.59999999999997, "end": 324.96, "text": " Or this happens."}, {"start": 325.68, "end": 329.03999999999996, "text": " The algorithm is not so trivial after all."}, {"start": 329.03999999999996, "end": 331.76, "text": " And this is what I love about this paper."}, {"start": 331.76, "end": 334.15999999999997, "text": " It proposes a simple idea."}, {"start": 334.15999999999997, "end": 338.24, "text": " And this idea is simple, but not easy."}, {"start": 338.24, "end": 340.8, "text": " If we think it is easy, this happens."}, {"start": 341.44, "end": 344.24, "text": " But if we do it well, this happens."}, {"start": 344.24, "end": 348.96000000000004, "text": " The paper describes in detail how to perform this so it works properly"}, {"start": 348.96000000000004, "end": 352.72, "text": " and provides a ton of immediate things it can be used to."}, {"start": 353.28000000000003, "end": 354.88, "text": " This is not a product."}, {"start": 354.88, "end": 359.6, "text": " This is an idea and a list of potential applications for it."}, {"start": 359.6, "end": 363.76, "text": " In my opinion, this is basic academic research at its best."}, {"start": 364.32, "end": 365.28000000000003, "text": " Bravo."}, {"start": 365.28000000000003, "end": 366.64, "text": " I love it."}, {"start": 366.64, "end": 368.96, "text": " And what would you use this for?"}, {"start": 368.96, "end": 371.12, "text": " Please let me know in the comments below."}, {"start": 371.12, "end": 373.12, "text": " I'd love to hear your ideas."}, {"start": 373.12, "end": 378.08, "text": " And until then, we will have no more tangled earbuds in our pockets."}, {"start": 378.08, "end": 381.28, "text": " And we can thank Computer Graphics Research for that."}, {"start": 381.28, "end": 383.2, "text": " What a time to be alive."}, {"start": 383.2, "end": 386.71999999999997, "text": " This video has been supported by weights and biases."}, {"start": 386.71999999999997, "end": 391.28, "text": " And being a machine learning researcher means doing tons of experiments"}, {"start": 391.28, "end": 393.91999999999996, "text": " and of course creating tons of data."}, {"start": 393.92, "end": 396.32, "text": " But I am not looking for data."}, {"start": 396.32, "end": 398.40000000000003, "text": " I am looking for insights."}, {"start": 398.40000000000003, "end": 401.92, "text": " And weights and biases helps with exactly that."}, {"start": 401.92, "end": 403.92, "text": " They have tools for experiment tracking,"}, {"start": 403.92, "end": 405.92, "text": " data set and model versioning,"}, {"start": 405.92, "end": 409.36, "text": " and even hyper parameter optimization."}, {"start": 409.36, "end": 413.92, "text": " No wonder this is the experiment tracking tool choice of open AI"}, {"start": 413.92, "end": 415.92, "text": " Toyota Research, Samsung,"}, 
{"start": 415.92, "end": 418.72, "text": " and many more prestigious labs."}, {"start": 418.72, "end": 423.52000000000004, "text": " Make sure to use the link WNB.ME-slash-paper-paste."}, {"start": 423.52, "end": 427.52, "text": " Or just click the link in the video description."}, {"start": 427.52, "end": 431.52, "text": " And try this 10 minute example of weights and biases today"}, {"start": 431.52, "end": 435.52, "text": " to experience the wonderful feeling of training a neural network"}, {"start": 435.52, "end": 439.52, "text": " and being in control of your experiments."}, {"start": 439.52, "end": 441.52, "text": " After you try it, you won't want to go back."}, {"start": 441.52, "end": 443.52, "text": " Thanks for watching and for your generous support."}, {"start": 443.52, "end": 453.52, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=j8tMk-GE8hY
NVIDIA’s New AI: Wow, Instant Neural Graphics! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 #NVIDIA's paper "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding" (i.e., instant-ngp) is available here: https://nvlabs.github.io/instant-ngp/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #instantnerf
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to do this and this and this. One of these applications is called a NeRF. What is that? NeRFs mean that we take a collection of photos like these and magically create a video where we can fly through the scene. Yes, typically scientists now use some sort of learning-based AI method to fill in all this information between these photos. This is something that sounded like science fiction just a couple of years ago, and now here we are. Now, these are mostly learning-based methods, therefore these techniques need some training time. Wanna see how their results evolve over time? I surely do, so let's have a look together. This NeRF paper was published about a year and a half or two years ago. There, we typically had to wait for at least a few hours for something to happen. Then came the Plenoxels paper with something that looks like black magic. Yes, that's right, this one trains in a matter of minutes. And it was published just two months ago. Such improvement in just two years. But here is NVIDIA's new paper from about a month ago. And yes, I hear you asking, Károly, are you telling me that a two-month-old paper of this caliber is going to be outperformed by a one-month-old paper? Yes, that is exactly what I'm saying. Now hold on to your papers and look here, with the new method the training takes... what? Less time than I need to say this sentence, because it is already done. So first, we waited from hours to days. Then, two years later, it trains in minutes, and a month later, just a month later, it trains in a couple of seconds. Basically, nearly instantly. And if we let it run for a bit longer, but still less than two minutes, it will not only outperform a naive technique, but will even provide better quality results than a previous method while training about ten times quicker. That is absolutely incredible. I would say that this is swift progress in machine learning research, but that word will not cut it here. This is truly something else. But if that wasn't enough, NeRFs are not the only thing this one can do. It can also approximate a gigapixel image. What is that? That is an image with tons of data in it, and the AI is asked to create a cheaper neural representation of this image. And we can just keep zooming in and zooming in, and we still find new details there. And if you have been holding on to your papers so far, now squeeze that paper, because what you see here is not the result, but the whole training process itself. Really? Yes, really. Did you see it? Well, did you blink? Because if you did, you almost certainly missed it. This was also trained from scratch right in front of our eyes. But it's so quick that if you take just a moment to hold onto your papers a bit more tightly, you have already missed it. Once again, a couple of papers before this took several hours at the very least. That is outstanding. And if we were done here, I would already be very happy, but we are not done yet, not even close. It can still do two more amazing things. One, this is a neural signed distance field it has produced. That is a mapping from 3D coordinates in a virtual world to the distance to the nearest surface. Essentially, it learns the geometry of the object better because it knows what parts are inside and outside. And it is blazing fast, surprisingly, even for objects with detailed geometry. And my favorite, it can also do neural radiance caching. What is that?
At the risk of oversimplifying the problem, essentially, it is learning to perform a light transport simulation. It took me several years of research to be able to produce such a light simulation. So let's see how long it takes for the AI to learn to do this. Well, let's see. Holy mother of papers, NVIDIA, what are you doing? I give up. As you see, the pace of progress in AI and computer graphics research is absolutely incredible, and even better, it is accelerating over time. Things that were wishful thinking 10 years ago became not only possible, but are now easy over the span of just a couple of papers. I am stunned. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
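A quick sketch may help make the signed distance field idea above concrete. This is not NVIDIA's implementation, just a minimal Python illustration with an analytic sphere SDF and the sphere-tracing loop a renderer would run against a learned distance function; all names and numbers here are made up for the example.

```python
import numpy as np

# Analytic signed distance field (SDF) of a sphere: negative inside,
# zero on the surface, positive outside. A neural SDF is a small
# network trained to regress exactly this kind of function.
def sphere_sdf(points, radius=1.0):
    return np.linalg.norm(points, axis=-1) - radius

# Sample training data: random 3D points and their true distances.
points = np.random.uniform(-2.0, 2.0, size=(10_000, 3))
distances = sphere_sdf(points)

# Once we have a distance function (learned or analytic), a renderer
# can sphere-trace rays through it: step forward by the returned
# distance until it falls below a small threshold, i.e., the surface.
def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d
    return None  # the ray missed the object

hit = sphere_trace(np.array([0.0, 0.0, -3.0]),
                   np.array([0.0, 0.0, 1.0]), sphere_sdf)
print(hit)  # ~2.0: a camera at z=-3 hits the unit sphere at z=-1
```

Knowing inside from outside is exactly what the sign gives you, which is why learning this representation captures geometry so robustly.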
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.72, "end": 11.68, "text": " Today we are going to do this and this and this."}, {"start": 11.68, "end": 14.96, "text": " One of these applications is called a Nerf."}, {"start": 14.96, "end": 16.56, "text": " What is that?"}, {"start": 16.56, "end": 23.04, "text": " Nerfs mean that we have a collection of photos like these and magically create a video"}, {"start": 23.04, "end": 26.0, "text": " where we can fly through these photos."}, {"start": 26.0, "end": 32.56, "text": " Yes, typically scientists now use some sort of learning based AI method to fill in all"}, {"start": 32.56, "end": 35.84, "text": " this information between these photos."}, {"start": 35.84, "end": 41.519999999999996, "text": " This is something that sounded like science fiction just a couple years ago and now here"}, {"start": 41.519999999999996, "end": 42.519999999999996, "text": " we are."}, {"start": 42.519999999999996, "end": 48.400000000000006, "text": " Now these are mostly learning based methods, therefore these techniques need some training"}, {"start": 48.400000000000006, "end": 49.400000000000006, "text": " time."}, {"start": 49.4, "end": 55.8, "text": " Wanna see how their results evolve over time?"}, {"start": 55.8, "end": 56.6, "text": " I surely do, so let's have a look together."}, {"start": 56.6, "end": 61.92, "text": " This Nerf paper was published about a year and a half or two years ago."}, {"start": 61.92, "end": 67.08, "text": " We typically have to wait for at least a few hours for something to happen."}, {"start": 67.08, "end": 72.28, "text": " Then came the Planoxos paper with something that looks like black magic."}, {"start": 72.28, "end": 77.44, "text": " Yes, that's right, these trains in a matter of minutes."}, {"start": 77.44, "end": 80.8, "text": " And it was published just two months ago."}, {"start": 80.8, "end": 83.88, "text": " Such improvement in just two years."}, {"start": 83.88, "end": 88.84, "text": " But here is Nvidia's new paper from about a month ago."}, {"start": 88.84, "end": 95.2, "text": " And yes, I hear you asking, Karoi, are you telling me that a two month old paper of this"}, {"start": 95.2, "end": 100.2, "text": " caliber is going to be outperformed by a one month old paper?"}, {"start": 100.2, "end": 103.75999999999999, "text": " Yes, that is exactly what I'm saying."}, {"start": 103.76, "end": 110.28, "text": " Now hold on to your papers and look here, with the new method the training takes."}, {"start": 110.28, "end": 111.28, "text": " What?"}, {"start": 111.28, "end": 116.88000000000001, "text": " Last time then I have to add this sentence because it is already done."}, {"start": 116.88000000000001, "end": 122.0, "text": " So first we wait from hours to days."}, {"start": 122.0, "end": 129.92000000000002, "text": " Then two years later it trains in minutes and a month later just a month later it trains"}, {"start": 129.92000000000002, "end": 131.76, "text": " in a couple seconds."}, {"start": 131.76, "end": 134.88, "text": " Basically, nearly instantly."}, {"start": 134.88, "end": 140.6, "text": " And if we let it run for a bit longer but still less than two minutes it will not only"}, {"start": 140.6, "end": 146.88, "text": " outperform a naive technique but will even provide better quality results than a previous"}, {"start": 146.88, "end": 151.88, "text": " method while training for about ten times quicker."}, {"start": 
151.88, "end": 154.07999999999998, "text": " That is absolutely incredible."}, {"start": 154.07999999999998, "end": 159.79999999999998, "text": " I would say that this is swift progress in machine learning research but that word will"}, {"start": 159.79999999999998, "end": 161.28, "text": " not cut it here."}, {"start": 161.28, "end": 163.92, "text": " This is truly something else."}, {"start": 163.92, "end": 169.28, "text": " But if that wasn't enough, nothing is not the only thing this one can do."}, {"start": 169.28, "end": 173.44, "text": " It can also approximate a gigapixel image."}, {"start": 173.44, "end": 174.96, "text": " What is that?"}, {"start": 174.96, "end": 181.6, "text": " That is an image with tons of data in it and the AI is asked to create a cheaper neural"}, {"start": 181.6, "end": 184.28, "text": " representation of this image."}, {"start": 184.28, "end": 190.52, "text": " And we can just keep zooming in and zooming in and we still find no details there."}, {"start": 190.52, "end": 196.12, "text": " And if you have been holding on to your paper so far, now squeeze that paper because what"}, {"start": 196.12, "end": 198.84, "text": " you see here is not the result."}, {"start": 198.84, "end": 201.96, "text": " But the whole training process itself."}, {"start": 201.96, "end": 202.96, "text": " Really?"}, {"start": 202.96, "end": 204.76000000000002, "text": " Yes, really."}, {"start": 204.76000000000002, "end": 205.92000000000002, "text": " Did you see it?"}, {"start": 205.92000000000002, "end": 208.32000000000002, "text": " Well, did you blink?"}, {"start": 208.32000000000002, "end": 211.44, "text": " Because if you did, you almost certainly missed it."}, {"start": 211.44, "end": 215.88, "text": " This was also trained from scratch right in front of our eyes."}, {"start": 215.88, "end": 222.88, "text": " But it's so quick that if you take just a moment to hold onto your papers a bit more tightly"}, {"start": 222.88, "end": 224.92, "text": " and you already missed it."}, {"start": 224.92, "end": 230.88, "text": " Once again, a couple of papers before this took several hours at the very least."}, {"start": 230.88, "end": 232.92, "text": " That is outstanding."}, {"start": 232.92, "end": 238.76, "text": " And if we were done here, I would already be very happy, but we are not done yet, not"}, {"start": 238.76, "end": 239.76, "text": " even close."}, {"start": 239.76, "end": 243.44, "text": " It can still do two more amazing things."}, {"start": 243.44, "end": 248.07999999999998, "text": " One, this is a neural sign distance field it has produced."}, {"start": 248.07999999999998, "end": 254.0, "text": " That is a mapping from 3D coordinates in a virtual world to distance to a surface."}, {"start": 254.0, "end": 260.76, "text": " Essentially, it learns the geometry of the object better because it knows what parts are inside"}, {"start": 260.76, "end": 262.88, "text": " and outside."}, {"start": 262.88, "end": 269.04, "text": " And it is blazing fast, surprisingly even for objects with detailed geometry."}, {"start": 269.04, "end": 273.92, "text": " And my favorite, it can also do neural radiance caching."}, {"start": 273.92, "end": 274.92, "text": " What is that?"}, {"start": 274.92, "end": 280.76000000000005, "text": " At the risk of simplifying the problem, essentially, it is learning to perform a light transport"}, {"start": 280.76000000000005, "end": 281.76000000000005, "text": " simulation."}, {"start": 281.76000000000005, "end": 287.64000000000004, "text": " It took 
me several years of research to be able to produce such a light simulation."}, {"start": 287.64000000000004, "end": 292.36, "text": " So let's see how long it takes for the AI to learn to do this."}, {"start": 292.36, "end": 295.76, "text": " Well, let's see."}, {"start": 295.76, "end": 300.24, "text": " Holy matter of papers and video, what are you doing?"}, {"start": 300.24, "end": 301.44, "text": " I give up."}, {"start": 301.44, "end": 307.64, "text": " As you see, the pace of progress in AI and computer graphics research is absolutely incredible"}, {"start": 307.64, "end": 312.44, "text": " and even better, it is accelerating over time."}, {"start": 312.44, "end": 318.64, "text": " Things that were wishful thinking 10 years ago became not only possible, but are now"}, {"start": 318.64, "end": 322.56, "text": " easy over the span of just a couple of papers."}, {"start": 322.56, "end": 324.56, "text": " I am stunned."}, {"start": 324.56, "end": 326.36, "text": " What a time to be alive."}, {"start": 326.36, "end": 329.8, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 329.8, "end": 335.76, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 335.76, "end": 343.68, "text": " They recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your"}, {"start": 343.68, "end": 350.16, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 350.16, "end": 355.68, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 355.68, "end": 362.08000000000004, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 362.08000000000004, "end": 363.88000000000005, "text": " workstations or servers."}, {"start": 363.88000000000005, "end": 369.32000000000005, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 369.32000000000005, "end": 370.64000000000004, "text": " instances today."}, {"start": 370.64000000000004, "end": 375.36, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 375.36, "end": 376.36, "text": " for you."}, {"start": 376.36, "end": 380.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eaSTGOgO-ss
NVIDIA’s New AI: Superb Details, Super Fast! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here (thank you Soumik Rakshit!): https://wandb.ai/geekyrakshit/poegan/reports/PoE-GAN-Generating-Images-from-Multi-Modal-Inputs--VmlldzoxNTA5MzUx 📝 The paper "Multimodal Conditional Image Synthesis with Product-of-Experts GANs" (#PoEGAN) is available here: https://deepimagination.cc/PoE-GAN/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to look at NVIDIA's spectacular new AI that can generate beautiful images for us. But this is not just any kind of image generation. No, no, this is different. Let's see how it is different. For instance, we can write a description and the appropriate image comes out. Snowy mountains, pink cloudy sky, checkmark. Okay, so we can give it art direction. Make no mistake, this is fantastic, but this has been done before, so nothing new here yet. Or, we can create a segmentation map; this tells the AI what things are. The sea is down there, mountains and sky up here. Looking great, but this has been done before too. For instance, NVIDIA's previous GauGAN paper could do this too. Nothing new here yet. Or, we can tell the AI where things are by sketching. This one also works, but this has been done too. So, is there nothing new in this paper? Well, of course there is. And now, hold onto your papers and watch as we fuse all of these descriptions together. We tell it where things are and what things are. But I wonder if we can make this mountain snowy, and a pink cloudy sky on top of all things. Yes, we can. Oh wow, I love it. The sea could have a little more detail, but the rest of the image is spectacular. So, with this new technique, we can tell the AI where things are, what things are, and on top of it, we can also give it art direction. And here's the key: all of these can be done in any combination. Now, did you see it? Curiously, an image popped up at the start of the video when we unchecked all the boxes. Why is that? Is that a bug? I'll tell you in a bit what that is. So far, these were great examples, but let's try to push this to its limits and see what it can do. For instance, how quickly can we iterate with this? How quick is it to correct mistakes or improve the work? Oh boy, super quick. When giving art direction to the AI, we can update the text, and the output refreshes almost as quickly as we can type. The sketch-to-image feature is also a great tool by itself. Of course, there is not only one answer. There are many pictures that this input could describe. So, how do we control what we get? Well, it can even generate variants for us. With this new work, we can even draw a piece of rock within the sea, and the rock will indeed appear. And not only that, but it understands that the waves have to go around it too. An understanding of physics, that is insanity. My goodness. Or better, if we know in advance that we are looking for tall trees and autumn leaves, we can even start with the art direction. And then, when we add our labels, they will be satisfied. We can have our river, okay, but the trees and the leaves will always be there. Finally, we can sketch on top of this to have additional control over the hills and clouds. And get this, we can even edit real images. So, how does this black magic work? Well, we have four neural networks, four experts, if you will. And the new technique describes how to fuse their expertise together into one amazing package. And the result is a technique that outperforms previous techniques in most of the tested cases. So, are these some ancient methods from many years ago that are outperformed, or are they cutting edge? And here comes the best part. If you have been holding onto your papers so far, now squeeze that paper, because these techniques are not some ancient methods, not at all. Both of these methods are from the same year as this technique, the same year.
Such improvement in less than a year. That is outstanding. What a time to be alive. Now, I made a promise to you early in the video, and the promise was explaining this. Yes, the technique generates results even when not given a lot of instruction. Yes, this was the early example when we unchecked all the boxes. What quality images can we expect then? Well, here are some uncurated examples. This means that the authors did not cherry-pick here, they just dumped a bunch of results here. And, oh my goodness, these are really good. The details are really there. The resolution of the images could be improved, but we saw with the previous GauGAN paper that this can improve a great deal in just a couple of years. Or, with this paper, in less than a year. I would like to send a huge congratulations to the scientists at NVIDIA. Bravo! And you fellow scholars just saw how much of an improvement we can get just one more paper down the line. So, just imagine what the next paper could bring. If you are curious, make sure to subscribe and hit the bell icon to not miss it when the follow-up paper appears on Two Minute Papers. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get the free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
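As a side note on the "four experts" idea mentioned above: the underlying product-of-experts rule has a particularly simple closed form for Gaussian experts, where precisions add up. Here is a minimal, illustrative Python sketch of that rule; it is not the paper's actual architecture, and the function and toy numbers are assumptions for the example.

```python
import numpy as np

# Product of independent Gaussian experts: the fused Gaussian has
# precision equal to the sum of the experts' precisions, and a
# precision-weighted mean. PoE-GAN applies this kind of fusion to
# combine what the text, segmentation, sketch, and style encoders say.
def product_of_gaussian_experts(means, variances):
    precisions = 1.0 / np.asarray(variances)
    fused_variance = 1.0 / precisions.sum(axis=0)
    fused_mean = fused_variance * (precisions * np.asarray(means)).sum(axis=0)
    return fused_mean, fused_variance

# Two "experts" voting on a 1D latent: one confident, one vague.
means = [np.array([0.0]), np.array([2.0])]
variances = [np.array([0.1]), np.array([1.0])]
mu, var = product_of_gaussian_experts(means, variances)
print(mu, var)  # ~0.18, ~0.09: the fused estimate sides with the confident expert
```

This also hints at why unchecking all the boxes still produces an image: with no experts speaking up, the model simply falls back to its unconditional prior.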
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.6, "text": " Today we are going to look at Nvidia's spectacular new AI that can generate beautiful images for us."}, {"start": 12.32, "end": 21.04, "text": " But this is not image generation of any kind. No, no, this is different. Let's see how it is different."}, {"start": 21.04, "end": 28.16, "text": " For instance, we can write a description and the appropriate image comes out. Snowy mountains,"}, {"start": 28.16, "end": 37.44, "text": " pink, cloudy sky, checkmark. Okay, so we can give it our direction. Make no mistake, this is fantastic,"}, {"start": 37.44, "end": 45.04, "text": " but this has been done before, so nothing new here. Yet, or we can create a segmentation map,"}, {"start": 45.04, "end": 52.08, "text": " this tells the AI what things are. The sea is down there, mountains and sky up here."}, {"start": 52.08, "end": 60.08, "text": " Looking great, but this has been done before too. For instance, Nvidia's previous Gauguin paper could"}, {"start": 60.08, "end": 68.16, "text": " do this too. Nothing new here. Yet, or we can tell the AI where things are by sketching."}, {"start": 68.16, "end": 75.28, "text": " This one also works, but this has been done too. So, is there nothing new in this paper?"}, {"start": 75.28, "end": 82.8, "text": " Well, of course there is. And now, hold onto your papers and watch as we fuse all of these"}, {"start": 82.8, "end": 90.32000000000001, "text": " descriptions together. We tell it where things are and what things are. But I wonder if we can"}, {"start": 90.32000000000001, "end": 98.64, "text": " make this mountain snowy and a pink cloudy sky on top of all things. Yes, we can. Oh wow,"}, {"start": 98.64, "end": 105.36, "text": " I love it. The sea could have a little more detail, but the rest of the image is spectacular."}, {"start": 106.0, "end": 113.12, "text": " So, with this new technique, we can tell the AI where things are, what things are, and on top of"}, {"start": 113.12, "end": 120.08, "text": " it, we can also give it our direction. And here's the key, all of these can be done in any combination."}, {"start": 121.44, "end": 128.4, "text": " Now, did you see it? Curiously, an image popped up at the start of the video when we unchecked"}, {"start": 128.4, "end": 136.4, "text": " all the boxes. Why is that? Is that a bug? I'll tell you in a bit what that is. So far, these were"}, {"start": 136.4, "end": 142.64000000000001, "text": " great examples, but let's try to push this to its limits and see what it can do. For instance,"}, {"start": 142.64000000000001, "end": 149.36, "text": " how quickly can we iterate with this? How quick is it to correct mistakes or improve the work?"}, {"start": 150.32, "end": 157.84, "text": " Oh boy, super quick. When giving our direction to the AI, we can update the text and the output"}, {"start": 157.84, "end": 164.96, "text": " refreshes almost as quickly as we can type. The sketch to image feature is also a great tool by"}, {"start": 164.96, "end": 171.76, "text": " itself. Of course, there is not only one way. There are many pictures that this could describe."}, {"start": 171.76, "end": 177.84, "text": " So, how do we control what we get? 
Well, it can even generate variants for us."}, {"start": 178.88, "end": 185.2, "text": " With this new work, we can even draw a piece of rock within the sea and the rock will indeed"}, {"start": 185.2, "end": 192.0, "text": " appear. And not only that, but it understands that the waves have to go around it too."}, {"start": 192.72, "end": 201.35999999999999, "text": " An understanding of physics, that is insanity. My goodness. Or better, if we know in advance"}, {"start": 201.35999999999999, "end": 207.67999999999998, "text": " that we are looking for tall trees and autumn leaves, we can even start with the art direction."}, {"start": 207.68, "end": 215.84, "text": " And then, when we add our labels, they will be satisfied. We can have our river, okay, but the trees"}, {"start": 215.84, "end": 223.04000000000002, "text": " and the leaves will always be there. Finally, we can sketch on top of this to have additional"}, {"start": 223.04000000000002, "end": 232.16, "text": " control over the hills and clouds. And get this, we can even edit real images. So, how does this"}, {"start": 232.16, "end": 239.6, "text": " black magic work? Well, we have four neural networks, four experts, if you will. And the new"}, {"start": 239.6, "end": 246.8, "text": " technique describes how to fuse their expertise together into one amazing package. And the result"}, {"start": 246.8, "end": 253.12, "text": " is a technique that outperforms previous techniques in most of the tested cases. So,"}, {"start": 253.12, "end": 260.15999999999997, "text": " are these some ancient methods from many years ago that are outperformed or are they cutting edge?"}, {"start": 260.16, "end": 266.72, "text": " And here comes the best part. If you have been holding onto your paper so far, now squeeze that"}, {"start": 266.72, "end": 273.20000000000005, "text": " paper because these techniques are not some ancient methods, not at all. Both of these methods"}, {"start": 273.20000000000005, "end": 281.68, "text": " are from the same year as this technique, the same year. Such improvements in last year. That is"}, {"start": 281.68, "end": 288.40000000000003, "text": " outstanding. What a time to be alive. Now, I made a promise to you early in the video, and the"}, {"start": 288.4, "end": 295.03999999999996, "text": " promise was explaining this. Yes, the technique generates results even when not given a lot of"}, {"start": 295.03999999999996, "end": 302.23999999999995, "text": " instruction. Yes, this was the early example when we unchecked all the boxes. What quality images"}, {"start": 302.23999999999995, "end": 308.79999999999995, "text": " can we expect then? Well, here are some uncurated examples. This means that the authors did not"}, {"start": 308.79999999999995, "end": 316.08, "text": " cherry pick here, just dumped a bunch of results here. And, oh my goodness, these are really good."}, {"start": 316.08, "end": 322.24, "text": " The details are really there. The resolution of the images could be improved, but we saw with"}, {"start": 322.24, "end": 328.96, "text": " the previous Gaugan paper that this can improve a great deal in just a couple years. Or with this"}, {"start": 328.96, "end": 335.44, "text": " paper in less than a year. I would like to send a huge congratulations to the scientists at NVIDIA"}, {"start": 335.44, "end": 342.08, "text": " Bravo. And you fellow scholars just saw how much of an improvement we can get just one more"}, {"start": 342.08, "end": 348.64, "text": " paper down the line. 
So, just imagine what the next paper could bring. If you are curious,"}, {"start": 348.64, "end": 354.24, "text": " make sure to subscribe and hit the bell icon to not miss it when the follow-up paper appears on"}, {"start": 354.24, "end": 360.15999999999997, "text": " two-minute papers. What you see here is a report of this exact paper we have talked about, which was"}, {"start": 360.15999999999997, "end": 365.84, "text": " made by Wades and Biasis. I put a link to it in the description. Make sure to have a look. I think"}, {"start": 365.84, "end": 371.84, "text": " it helps you understand this paper better. Wades and Biasis provides tools to track your experiments"}, {"start": 371.84, "end": 376.88, "text": " in your deep learning projects. Their system is designed to save you a ton of time and money,"}, {"start": 376.88, "end": 383.35999999999996, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub,"}, {"start": 383.35999999999996, "end": 388.55999999999995, "text": " and more. And the best part is that Wades and Biasis is free for all individuals,"}, {"start": 388.55999999999995, "end": 394.71999999999997, "text": " academics, and open source projects. It really is as good as it gets. Make sure to visit them"}, {"start": 394.71999999999997, "end": 401.03999999999996, "text": " through wnb.com slash papers or just click the link in the video description and you can get"}, {"start": 401.04, "end": 405.52000000000004, "text": " the free demo today. Our thanks to Wades and Biasis for their long-standing support"}, {"start": 405.52000000000004, "end": 410.48, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 410.48, "end": 438.64000000000004, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DTqcPEhSHB8
Adobe's New Method: Stunning Creatures... Even Cheaper! 👾
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Tessellation-Free Displacement Mapping for Ray Tracing" is available here: https://research.adobe.com/publication/tessellation-free-displacement-mapping-for-ray-tracing/ https://perso.telecom-paristech.fr/boubek/papers/TFDM/ 📝 Our previous paper with the planet scene: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to create a virtual world with breathtakingly detailed pieces of geometry. These are just some of the results of the new technique, and boy, are they amazing. I wonder how expensive this will get. We'll see about that in a moment. Now, here we wish to create a virtual world, and we want this world to be convincing. Therefore, we will need tons of high-resolution geometry, like the ones you see from these previous episodes. But all these details in the geometry mean tons of data that has to be stored somewhere. Traditional techniques allow us to crank up the detail in the geometry, but there is a price to be paid for it. And the price is that we need to throw more memory and computation at the problem. The more we do, the higher the resolution of the geometry that we get. However, here comes the problem. Ouch! Look, at higher resolution levels, the price to be paid is getting a little steep. Hundreds of megabytes of memory is quite steep for just one object, but wait, it gets worse. Oh, come on! Gigabytes, that is a little too much. Imagine a scene with hundreds of these objects lying around. No graphics card has enough memory to do that. But this new technique helps us add small bumps and ridges to these objects much more efficiently than previous techniques. This opens up the possibility of creating breathtakingly detailed digital objects where even the tiniest imperfections on our armor can be seen. Okay, that is wonderful. But let's compare how previous techniques deal with this problem and see if this new one is any better. Here is a traditional technique at its best when using two gigabytes of memory. That is quite expensive. And here is the new technique. Hmm, look at that. This looks even better. Surely this means that it needs even more memory. How many gigabytes does this need? Or, if not gigabytes, how many hundreds of megabytes? What do you think? Please let me know in the comments section below. I'll wait. Thank you. I am super excited to see your guesses. Now, hold on to your papers, because actually, it's not gigabytes. Not even hundreds of megabytes. No, no. It is 34 megabytes. That is a pittance for a piece of geometry of this quality. This is insanity. But wait, it gets better. We can even dramatically change the displacements on our models. Here is the workflow. We plug this geometry into a light simulation program. And we can change the geometry, the lighting, material properties, or even clone our object many, many times. And it still runs interactively. Now, what about the noise in these images? This is a light simulation technique called path tracing. And the noise that you see here slowly clears up over time as we simulate the paths of many more millions of light rays. If we let it run for long enough, we end up with a nearly perfect piece of geometry. And I wonder how the traditional technique is able to perform if it is also given the same memory allowance. Well, let's see. Wow, there is no contest here. Even for other cases, it is not uncommon that the new technique can create equivalent or even better geometry quality, but use 50 times less memory to do that. So we might get more detailed virtual worlds for cheaper. Sign me up right now. What a time to be alive. And one more thing. When checking out the talk for the paper, I saw this. Wow! 20 views. 20 people have seen this talk.
Now, I always say that views are of course not everything, but I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good. People have to know. And if you agree, please spread the word on these papers and show them to people you know. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
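To get a feel for why the memory cost balloons at higher resolution levels, here is a back-of-the-envelope Python sketch. The byte counts and resolutions are illustrative assumptions, not the paper's exact data layout; the point is only how a densely tessellated vertex grid compares to a coarse base mesh plus a displacement texture.

```python
# Rough memory estimate: dense tessellation versus a coarse mesh
# with a displacement map (illustrative numbers only).
BYTES_PER_VERTEX = 3 * 4   # x, y, z stored as 32-bit floats
BYTES_PER_TEXEL = 4        # one 32-bit displacement value

def tessellated_megabytes(verts_per_side):
    # A grid of verts_per_side^2 vertices, positions only.
    return verts_per_side**2 * BYTES_PER_VERTEX / 1e6

def displaced_megabytes(base_verts, texture_side):
    # A small base mesh plus a square displacement texture.
    return (base_verts * BYTES_PER_VERTEX
            + texture_side**2 * BYTES_PER_TEXEL) / 1e6

print(tessellated_megabytes(16_384))       # ~3221 MB: gigabytes per object
print(displaced_megabytes(10_000, 4_096))  # ~67 MB: similar detail, far cheaper
```

Even this crude estimate lands in the right ballpark: vertex grids grow quadratically with resolution, which is exactly the "gigabytes per object" wall the video describes, while keeping the detail in a texture stays tens of megabytes.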
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 11.5, "text": " Today we are going to create a virtual world with breathtakingly detailed pieces of geometry."}, {"start": 11.5, "end": 17.5, "text": " These are just some of the results of the new technique and boy, are they amazing?"}, {"start": 17.5, "end": 22.5, "text": " I wonder how expensive this will get? We'll see about that in a moment."}, {"start": 22.5, "end": 29.5, "text": " Now here we wish to create a virtual world and we want this world to be convincing."}, {"start": 29.5, "end": 36.5, "text": " Therefore we will need tons of high resolution geometry like the ones you see from these previous episodes."}, {"start": 36.5, "end": 43.5, "text": " But all these details in the geometry means tons of data that has to be stored somewhere."}, {"start": 43.5, "end": 48.5, "text": " Traditional techniques allow us to crank up the detail in the geometry,"}, {"start": 48.5, "end": 57.5, "text": " but there is a price to be paid for it. And the price is that we need to throw more memory and computation at the problem."}, {"start": 57.5, "end": 62.5, "text": " The more we do, the higher the resolution of the geometry that we get."}, {"start": 62.5, "end": 65.5, "text": " However, here comes the problem."}, {"start": 65.5, "end": 73.5, "text": " Ouch! Look! At higher resolution levels, yes! The price to be paid is getting a little steep."}, {"start": 73.5, "end": 80.5, "text": " Hundreds of megabytes of memory is quite steep for just one object, but wait, it gets worse."}, {"start": 80.5, "end": 89.5, "text": " Oh, come on! Gigabytes, that is a little too much. Imagine a scene with hundreds of these objects laying around."}, {"start": 89.5, "end": 92.5, "text": " No graphics card has enough memory to do that."}, {"start": 92.5, "end": 101.5, "text": " But this new technique helps us add small bumps and ridges to these objects much more efficiently than previous techniques."}, {"start": 101.5, "end": 107.5, "text": " This opens up the possibility for creating breathtakingly detailed digital objects"}, {"start": 107.5, "end": 112.5, "text": " where even the tiniest imperfections on our armor can be seen."}, {"start": 112.5, "end": 122.5, "text": " Okay, that is wonderful. But let's compare how previous techniques can deal with this problem and see if this new one is any better."}, {"start": 122.5, "end": 128.5, "text": " Here is a traditional technique at its best when using two gigabytes of memory."}, {"start": 128.5, "end": 133.5, "text": " That is quite expensive. And here is the new technique."}, {"start": 133.5, "end": 141.5, "text": " Hmm, look at that. This looks even better. Surely this means that it needs even more memory."}, {"start": 141.5, "end": 148.5, "text": " How many gigabytes does this need? Or, if not gigabytes, how many hundreds of megabytes?"}, {"start": 148.5, "end": 153.5, "text": " What do you think? Please let me know in the comments section below. I'll wait."}, {"start": 153.5, "end": 157.5, "text": " Thank you. I am super excited to see your guesses."}, {"start": 157.5, "end": 162.5, "text": " Now, hold on to your papers because actually it's not gigabytes."}, {"start": 162.5, "end": 168.5, "text": " Not even hundreds of megabytes. No, no. It is 34 megabytes."}, {"start": 168.5, "end": 175.5, "text": " That is a pittance for a piece of geometry of this quality. 
This is insanity."}, {"start": 175.5, "end": 181.5, "text": " But wait, it gets better. We can even dramatically change the displacements on our models."}, {"start": 181.5, "end": 187.5, "text": " Here is the workflow. We plug this geometry into a light simulation program."}, {"start": 187.5, "end": 195.5, "text": " And we can change the geometry, the lighting, material properties, or even clone our object many, many times."}, {"start": 195.5, "end": 199.5, "text": " And it still runs interactively."}, {"start": 199.5, "end": 205.5, "text": " Now, what about the noise in these images? This is a light simulation technique called past tracing."}, {"start": 205.5, "end": 213.5, "text": " And the noise that you see here slowly clears up over time as we simulate the path of many more millions of light rays."}, {"start": 213.5, "end": 219.5, "text": " If we let it run for long enough, we end up with a nearly perfect piece of geometry."}, {"start": 219.5, "end": 227.5, "text": " And I wonder how the traditional technique is able to perform if it is also given the same memory allowance."}, {"start": 227.5, "end": 231.5, "text": " Well, let's see."}, {"start": 231.5, "end": 247.5, "text": " Wow, there is no contest here. Even for other cases, it is not uncommon that the new technique can create an equivalent or even better geometry quality, but use 50 times less memory to do that."}, {"start": 247.5, "end": 256.5, "text": " So we might get more detailed virtual worlds for cheaper. Sign me up right now. What a time to be alive."}, {"start": 256.5, "end": 262.5, "text": " And one more thing. When checking out the talk for the paper, I saw this."}, {"start": 262.5, "end": 268.5, "text": " Wow! 20 views. 20 people have seen this talk."}, {"start": 268.5, "end": 279.5, "text": " Now, I always say that views are of course not everything, but I am worried that if we don't talk about it here on too many papers, almost no one will talk about it."}, {"start": 279.5, "end": 289.5, "text": " And these works are so good. People have to know. And if you agree, please spread the word on these papers and show them to people, you know."}, {"start": 289.5, "end": 306.5, "text": " This episode has been supported by Coheer AI. Coheer builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code."}, {"start": 306.5, "end": 320.5, "text": " You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts to create your own custom models to understand text or even generated."}, {"start": 320.5, "end": 336.5, "text": " For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions."}, {"start": 336.5, "end": 346.5, "text": " Make sure to go to Coheer.ai slash papers or click the link in the video description and give it a try today. It's super easy to use."}, {"start": 346.5, "end": 350.5, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ItKi3h7IY2o
OpenAI GLIDE AI: Astounding Power! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "OpenAI GLIDE: Astounding Power, Now Even Cheaper!" is available here: https://github.com/openai/glide-text2im https://arxiv.org/abs/2112.10741 Try it here. Note that this seems to be a reduced model compared to the one in the paper (quite a bit!). Leave a comment with your results if you have found something cool! https://github.com/openai/glide-text2im 📝 Our material synthesis paper with the fluids: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ 🕊️ My twitter: https://twitter.com/twominutepapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to play with a magical AI where we just sit in our armchair, say the words, and it draws an image for us. Almost anything we can imagine. Almost. And before you ask, yes, this includes drawing corgis too. In the last few years, OpenAI set out to train an AI named GPT-3 that could finish your sentences. Then, they made Image GPT. This could even finish your images. Yes, not kidding. It could identify that the cat here likely holds a piece of paper and finish the picture accordingly, and it even understood that if we have a droplet here and we see just a portion of the ripples, then this means a splash must be filled in. And it gets better: then they invented an AI they call DALL·E. This one is insanity. We just tell the AI what image we would like to see, and it will draw it. Look, it can create a custom storefront for us. It understands the concept of low-polygon rendering, isometric views, clay objects, and more. And that's not all. It could even invent clocks with new shapes when asked. The crazy thing here is that it understands geometry, shapes, and even materials. For instance, look at this white clock here on the blue table. It not only put it on the table, but it also made sure to generate appropriate glossy reflections that match the color of the clock. And get this, DALL·E was published just about a year ago, and OpenAI already has a follow-up paper that they call GLIDE. And believe it or not, this can do more, and it can do it better. Well, I will believe it when I see it, so let's go. Now, hold on to your papers, and let's start with a hedgehog using a calculator. Wow, that looks incredible. It's not just a hedgehog plus a calculator, it really is using the calculator. Now, paint a fox in the style of the Starry Night painting. I love the style, and even the framing of the picture is quite good. There is even some space left to make sure that we see that starry night. Great decision-making. Now, a corgi with a red bow tie and a purple party hat. Excellent. And a pixel art corgi with a pizza. These are really good, but they are nothing compared to what is to come, because it can also perform conditional inpainting with text. Yes, I am not kidding, have a look at this little girl hugging a dog. But there is a problem with this. Do you know what the problem is? Of course, the problem is that this is not a corgi. Now it is. That is another great result. And if we wish that some zebras were added here, that's possible too. And we can also add a vase here. Look at that. It even understood that this is a glass table and added its own reflection. Now, I am a light transport researcher by trade, and this makes me very, very happy. However, it is also true that it seems to have changed the material properties of the table. It is now much more diffuse than it was before. Perhaps this is the AI's understanding of a new object blocking reflections. It's not perfect by any means, but it is a solid step forward. We can also give this gentleman a white hat. And as I look through these results, I find it absolutely amazing how well the hat blends into the scene. That is very challenging. Why? Well, in light transport research, we need to simulate the path of millions and millions of light rays to make sure that indirect illumination appears in a scene. For instance, look here.
This is one of our previous papers that showcases how fluids of different colors paint their diffuse surroundings to their own color. I find it absolutely beautiful. Now, let's switch the fluid to a different one. And yes, you see the difference. The link to this work is available in the video description below. And you see, simulating these effects is very costly and very difficult. But this is how proper light transport simulations need to be done. And this GLIDE AI can put new objects into a scene and make them blend in so well, this, to me, also signals a proper understanding of light transport. I can hardly believe what is going on here. Bravo! And wait, how do we know if this is really better than DALL·E? Are we supposed to just believe it? No. Not at all. Fortunately, comparing the results against DALL·E is very easy. Look, we just run the same prompts and see that there is no contest. The new GLIDE technique creates sharper, higher-resolution images with more detail, and it even follows our instructions better. The paper also showcases a user study where human evaluators also favored the new technique. Now, of course, we are not done here, not even this technique is perfect. Look, we can request a cat with eight legs and... wait a minute. It tried some multiplication trick, but we are not falling for it. A plus for effort, little AI, but of course, this is clearly one of the failure cases. And once again, this is a new AI where a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then OpenAI's GLIDE is a fighter jet. Absolutely incredible. Soon, this might democratize creating paintings and maybe even help inventing new things. And here comes the best part. You can try it too. The notebook for it is available in the video description. Make sure to leave your experiment results in the comments or just tweet them at me. I'd love to see what you ingenious fellow scholars bring out of this wonderful AI. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
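For the technically curious: the GLIDE paper reports that its best samples come from classifier-free guidance, where the diffusion model is queried once with the text prompt and once without it, and the difference is amplified. Below is a minimal Python sketch of that single step; the `model` argument and `fake_model` stand-in are made up for illustration, not the real glide-text2im API.

```python
import numpy as np

# Classifier-free guidance: run the diffusion model with the text
# prompt and with an empty prompt, then push the noise prediction
# further in the direction the text suggests (scale > 1 extrapolates).
def guided_noise_prediction(model, noisy_image, timestep, prompt, scale=3.0):
    eps_cond = model(noisy_image, timestep, prompt)       # text-conditioned
    eps_uncond = model(noisy_image, timestep, prompt="")  # unconditioned
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy stand-in so the sketch runs: a real model would be a trained
# diffusion U-Net; here we just fake a noise prediction per prompt.
def fake_model(x, t, prompt):
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.standard_normal(x.shape)

x_t = np.zeros((64, 64, 3))
eps_hat = guided_noise_prediction(fake_model, x_t, timestep=50,
                                  prompt="a corgi with a party hat")
print(eps_hat.shape)  # (64, 64, 3): fed into the usual denoising update
```

The guidance scale is the knob behind "follows our instructions better": higher values trade sample diversity for closer adherence to the prompt.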
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.84, "text": " Today, we are going to play with a magical AI where we just sit in our armchair,"}, {"start": 10.84, "end": 14.56, "text": " see the words, and it draws an image for us."}, {"start": 14.56, "end": 17.2, "text": " Almost anything we can imagine."}, {"start": 17.2, "end": 18.2, "text": " Almost."}, {"start": 18.2, "end": 23.48, "text": " And before you ask, yes, this includes drawing Corgis II."}, {"start": 23.48, "end": 31.04, "text": " In the last few years, open AI set out to train an AI named GPT-3 that could finish"}, {"start": 31.04, "end": 32.24, "text": " your sentences."}, {"start": 32.24, "end": 35.32, "text": " Then, they made image GPT."}, {"start": 35.32, "end": 38.24, "text": " This could even finish your images."}, {"start": 38.24, "end": 40.44, "text": " Yes, matkitting."}, {"start": 40.44, "end": 47.04, "text": " It could identify that the cat here likely holds a piece of paper and finish the picture"}, {"start": 47.04, "end": 54.6, "text": " accordingly and even understood that if we have a droplet here and we see just a portion"}, {"start": 54.6, "end": 59.56, "text": " of the ripples, then this means a splash must be filled in."}, {"start": 59.56, "end": 64.96000000000001, "text": " And it gets better than the invented an AI they call Dolly."}, {"start": 64.96000000000001, "end": 67.0, "text": " This one is insanity."}, {"start": 67.0, "end": 72.32, "text": " We just tell the AI what image we would like to see and it will draw it."}, {"start": 72.32, "end": 76.96000000000001, "text": " Look, it can create a custom storefront for us."}, {"start": 76.96, "end": 84.91999999999999, "text": " It understands the concept of low polygon rendering, isometric views, clay objects, and more."}, {"start": 84.91999999999999, "end": 86.39999999999999, "text": " And that's not all."}, {"start": 86.39999999999999, "end": 90.63999999999999, "text": " It could even invent clocks with new shapes when asked."}, {"start": 90.63999999999999, "end": 97.63999999999999, "text": " The crazy thing here is that it understands geometry, shapes, and even materials."}, {"start": 97.63999999999999, "end": 101.47999999999999, "text": " For instance, look at this white clock here on the blue table."}, {"start": 101.48, "end": 107.24000000000001, "text": " And it not only put it on the table, but it also made sure to generate appropriate"}, {"start": 107.24000000000001, "end": 111.68, "text": " glossary reflections that matches the color of the clock."}, {"start": 111.68, "end": 119.2, "text": " And get this, Dolly was published just about a year ago and OpenAI already has a follow-up"}, {"start": 119.2, "end": 122.32000000000001, "text": " paper that they call glide."}, {"start": 122.32000000000001, "end": 127.32000000000001, "text": " And believe it or not, this can do more and it can do it better."}, {"start": 127.32000000000001, "end": 131.44, "text": " Well, I will believe it when I see it, so let's go."}, {"start": 131.44, "end": 138.0, "text": " Now, hold on to your papers and let's start with a Hedgehog using a calculator."}, {"start": 138.0, "end": 141.04, "text": " Wow, that looks incredible."}, {"start": 141.04, "end": 147.88, "text": " It's not just a Hedgehog, plus a calculator, it really is using the calculator."}, {"start": 147.88, "end": 152.44, "text": " Now paint a fox in the style of the storey-night painting."}, {"start": 152.44, 
"end": 157.32, "text": " I love the style and even the framing of the picture is quite good."}, {"start": 157.32, "end": 162.12, "text": " There is even some space left to make sure that we see that storey-night."}, {"start": 162.12, "end": 164.0, "text": " Great decision-making."}, {"start": 164.0, "end": 169.04, "text": " Now a corgi with a red bow tie and a purple party hat."}, {"start": 169.04, "end": 170.04, "text": " Excellent."}, {"start": 170.04, "end": 173.72, "text": " And a pixel art corgi with a pizza."}, {"start": 173.72, "end": 179.24, "text": " These are really good, but they are nothing compared to what is to come, because it can"}, {"start": 179.24, "end": 183.4, "text": " also perform conditional in painting with text."}, {"start": 183.4, "end": 189.48000000000002, "text": " Yes, I am not kidding, have a look at this little girl hugging a dog."}, {"start": 189.48000000000002, "end": 192.20000000000002, "text": " But there is a problem with this."}, {"start": 192.20000000000002, "end": 194.20000000000002, "text": " Do you know what the problem is?"}, {"start": 194.20000000000002, "end": 198.4, "text": " Of course, the problem is that this is not a corgi."}, {"start": 198.4, "end": 200.20000000000002, "text": " Now it is."}, {"start": 200.20000000000002, "end": 202.88, "text": " That is another great result."}, {"start": 202.88, "end": 208.32, "text": " And if we wish that some zebras were added here, that's possible too."}, {"start": 208.32, "end": 211.4, "text": " And we can also add a vase here."}, {"start": 211.4, "end": 213.68, "text": " Look at that."}, {"start": 213.68, "end": 219.4, "text": " It even understood that this is a glass table and added its own reflection."}, {"start": 219.4, "end": 225.36, "text": " Now, I am a light transport researcher by trade, and this makes me very, very happy."}, {"start": 225.36, "end": 230.6, "text": " However, it is also true that it seems to have changed the material properties of the"}, {"start": 230.6, "end": 231.6, "text": " table."}, {"start": 231.6, "end": 235.04000000000002, "text": " It is now much more diffused than it was before."}, {"start": 235.04000000000002, "end": 240.72, "text": " Perhaps this is the AI's understanding of a new object blocking reflections."}, {"start": 240.72, "end": 245.6, "text": " It's not perfect by any means, but it is a solid step forward."}, {"start": 245.6, "end": 248.84, "text": " We can also give this gentleman a white hat."}, {"start": 248.84, "end": 254.88, "text": " And as I look through these results, I find it absolutely amazing how well the hat blends"}, {"start": 254.88, "end": 256.48, "text": " into the scene."}, {"start": 256.48, "end": 258.32, "text": " That is very challenging."}, {"start": 258.32, "end": 259.32, "text": " Why?"}, {"start": 259.32, "end": 264.96, "text": " Well, in light transport research, we need to simulate the path of millions and millions"}, {"start": 264.96, "end": 270.48, "text": " of light rays to make sure that indirect illumination appears in a scene."}, {"start": 270.48, "end": 272.72, "text": " For instance, look here."}, {"start": 272.72, "end": 278.72, "text": " This is one of our previous papers that showcases how fluids of different colors paint their"}, {"start": 278.72, "end": 281.52000000000004, "text": " diffused surroundings to their own color."}, {"start": 281.52000000000004, "end": 284.04, "text": " I find it absolutely beautiful."}, {"start": 284.04, "end": 287.8, "text": " Now, let's switch the fluid to a different one."}, {"start": 
287.8, "end": 291.0, "text": " And yes, you see the difference."}, {"start": 291.0, "end": 295.16, "text": " The link to this work is available in the video description below."}, {"start": 295.16, "end": 301.16, "text": " And you see, simulating these effects is very costly and very difficult."}, {"start": 301.16, "end": 305.88000000000005, "text": " But this is how proper light transport simulations need to be done."}, {"start": 305.88000000000005, "end": 312.6, "text": " And this glide AI can put no objects into a scene and make them blend in so well, this"}, {"start": 312.6, "end": 317.32000000000005, "text": " to me also seems a proper understanding of light transport."}, {"start": 317.32000000000005, "end": 320.64000000000004, "text": " I can hardly believe what is going on here."}, {"start": 320.64000000000004, "end": 321.64000000000004, "text": " Bravo."}, {"start": 321.64, "end": 326.12, "text": " And wait, how do we know if this is really better than Dolly?"}, {"start": 326.12, "end": 328.59999999999997, "text": " Are we supposed to just believe it?"}, {"start": 328.59999999999997, "end": 329.59999999999997, "text": " No."}, {"start": 329.59999999999997, "end": 330.59999999999997, "text": " Not at all."}, {"start": 330.59999999999997, "end": 334.76, "text": " Fortunately, comparing the results against Dolly is very easy."}, {"start": 334.76, "end": 340.52, "text": " Look, we just add the same prompts and see that there is no contest."}, {"start": 340.52, "end": 346.64, "text": " The no-glide technique creates sharper, higher resolution images with more detail and it"}, {"start": 346.64, "end": 349.71999999999997, "text": " even follows our instructions better."}, {"start": 349.72, "end": 356.52000000000004, "text": " The paper also showcases a user study where human evaluators also favored the new technique."}, {"start": 356.52000000000004, "end": 361.28000000000003, "text": " Now, of course, we are not done here, not even this technique is perfect."}, {"start": 361.28000000000003, "end": 367.56, "text": " Look, we can request a cat with eight legs and wait a minute."}, {"start": 367.56, "end": 371.88000000000005, "text": " It tried some multiplication trick, but we are not falling for it."}, {"start": 371.88000000000005, "end": 378.64000000000004, "text": " A plus for effort, little AI, but of course, this is clearly one of the failure cases."}, {"start": 378.64, "end": 385.88, "text": " And once again, this is a new AI where a vast body of knowledge lies within, but it only"}, {"start": 385.88, "end": 390.36, "text": " emerges if we can bring it out with properly written prompts."}, {"start": 390.36, "end": 396.2, "text": " It almost feels like a new kind of programming that is open to everyone, even people without"}, {"start": 396.2, "end": 399.08, "text": " any programming or technical knowledge."}, {"start": 399.08, "end": 405.8, "text": " If a computer is a bicycle for the mind, then open AI's glide is a fighter jet."}, {"start": 405.8, "end": 406.8, "text": " Absolutely incredible."}, {"start": 406.8, "end": 414.28000000000003, "text": " Soon, this might democratize creating paintings and maybe even help inventing new things."}, {"start": 414.28000000000003, "end": 416.16, "text": " And here comes the best part."}, {"start": 416.16, "end": 417.76, "text": " You can try it too."}, {"start": 417.76, "end": 421.16, "text": " The notebook for it is available in the video description."}, {"start": 421.16, "end": 426.08000000000004, "text": " Make sure to leave your experiment 
results in the comments or just read them at me."}, {"start": 426.08000000000004, "end": 432.28000000000003, "text": " I'd love to see what you ingenious fellow scholars bring out of this wonderful AI."}, {"start": 432.28000000000003, "end": 435.72, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 435.72, "end": 441.68, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 441.68, "end": 448.68, "text": " They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances."}, {"start": 448.68, "end": 455.28000000000003, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 455.28000000000003, "end": 456.28000000000003, "text": " Azure."}, {"start": 456.28000000000003, "end": 461.6, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 461.6, "end": 468.0, "text": " And researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 468.0, "end": 469.8, "text": " workstations or servers."}, {"start": 469.8, "end": 475.16, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 475.16, "end": 476.52000000000004, "text": " instances today."}, {"start": 476.52000000000004, "end": 481.24, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 481.24, "end": 482.24, "text": " for you."}, {"start": 482.24, "end": 509.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=WAuaCBmHa3U
Can A Goldfish Drive a Car? Yes! But How? 🐠
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "From fish out of water to new insights on navigation mechanisms in animals" is available here: https://www.sciencedirect.com/science/article/abs/pii/S0166432821005994 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail image credit: Matan Samina Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to try to teach a goldfish to drive a car, of sorts. Now you may be asking, are we talking about a virtual fish, like one of those virtual characters here? No, no, this is a real fish. Yes, really. I am not kidding. This is an experiment where researchers took a goldfish and put it into this contraption that they call an FOV, a fish-operated vehicle. Very good. I love it. This car is built in a way such that it goes in the direction the fish is swimming. Now, let's start the experiment by specifying a target, asking the fish to get there, and giving it a treat if it does. After a few days of training, we get something like this. Wow, it really went there. But wait a second, we are experienced Fellow Scholars here, so we immediately say that this could have happened by chance. Was this a cherry-picked experiment? How do we know that this is real proficiency, not just chance? How do we know if real learning is taking place? Well, we can test that, so let's randomize the starting point and see if it can still get there. The answer is… yes, yes it can. Absolutely amazing. This is just one example from the many experiments that were done in the paper. So, learning is happening. So much so that over time, our little friend learned so much that when it made a mistake, it could even correct it. Perhaps this means that in a follow-up paper, maybe they can learn to even deal with obstacles in the way. Now, note that this was just a couple of videos. There are many, many more experiments reported in the paper. At this point, we are still not perfectly sure that learning is really taking place here. So, let's run a multi-fish experiment and assess the results. Let's see… yes, as we let them train for longer, all six of our participants show remarkable improvement in finding the targets. The average amount of time taken is also decreasing rapidly over time. These two seem to be extremely good drivers, perhaps they should be doing this for a living. And if we sum up the performance of every fish in the experiment, we see that they were not too proficient in the first sessions, but after the training, wow, that is a huge improvement. Yes, learning indeed seems to be happening here. So much so that, yes, as a result, I kid you not, they can kind of navigate in the real world too. Now, note that the details of the study were approved by the university and were conducted in accordance with government regulations to make sure that nobody gets hurt or mistreated in the process. So, this was a lot of fun. But what is the insight here? The key insight is that maybe navigation capabilities are universal across species. We don't know for sure, but if it is true, that is an amazing insight. And who knows, a couple of papers down the line, if the self-driving car projects don't come to fruition, maybe we will have fish-operated Teslas instead. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold on to your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48 GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
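Since the transcript explains the FOV's control rule (the vehicle drives wherever the fish swims), here is a minimal, hypothetical Python sketch of that loop. The tracking and control functions are placeholder names invented for illustration, not the authors' code; the study used its own camera rig and controller.

```python
import numpy as np

def fish_centroid(mask: np.ndarray) -> np.ndarray:
    """Center of the fish's segmentation mask (a boolean image)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def drive_command(prev_mask: np.ndarray, curr_mask: np.ndarray,
                  speed: float = 1.0) -> np.ndarray:
    """Drive along the fish's swim direction: the displacement of
    its centroid between two consecutive camera frames."""
    d = fish_centroid(curr_mask) - fish_centroid(prev_mask)
    n = np.linalg.norm(d)
    if n < 1e-6:
        return np.zeros(2)   # fish is holding still: don't move
    return speed * d / n     # unit direction scaled by speed
```

In a real rig this function would run once per camera frame, feeding wheel velocities to the motor controller; the reward (the treat) is handled entirely outside this loop.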
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kato Ijona Ifehir."}, {"start": 4.64, "end": 10.48, "text": " Today we are going to try to teach a goldfish to drive a car of sorts."}, {"start": 10.48, "end": 17.04, "text": " Now you may be asking, are we talking about a virtual fish like one of those virtual characters here?"}, {"start": 17.04, "end": 20.0, "text": " No, no, this is a real fish."}, {"start": 20.0, "end": 22.64, "text": " Yes, really. I am not kidding."}, {"start": 22.64, "end": 28.64, "text": " This is an experiment where researchers talk a goldfish and put it into this contraption"}, {"start": 28.64, "end": 32.96, "text": " that they call an FOV, a fish-operated vehicle."}, {"start": 32.96, "end": 34.96, "text": " Very good. I love it."}, {"start": 34.96, "end": 41.6, "text": " This car is built in a way such that it goes in the direction where the fish is swimming."}, {"start": 41.6, "end": 47.28, "text": " Now, let's start the experiment by specifying a target and asking the fish to get there"}, {"start": 47.28, "end": 49.84, "text": " and give them a trade if they do."}, {"start": 49.84, "end": 53.6, "text": " After a few days of training, we get something like this."}, {"start": 54.72, "end": 57.040000000000006, "text": " Wow, it really went there."}, {"start": 57.04, "end": 61.12, "text": " But wait a second, we are experienced fellow scholars here,"}, {"start": 61.12, "end": 65.6, "text": " so we immediately say that this could happen by chance."}, {"start": 65.6, "end": 68.24, "text": " Was this a cherry-picked experiment?"}, {"start": 68.24, "end": 72.8, "text": " How do we know that this is real proficiency, not just chance?"}, {"start": 72.8, "end": 76.24, "text": " How do we know if real learning is taking place?"}, {"start": 76.24, "end": 83.36, "text": " Well, we can test that, so let's randomize the starting point and see if it can still get there."}, {"start": 83.36, "end": 89.36, "text": " The answer is\u2026 Yes, yes it can."}, {"start": 89.36, "end": 96.0, "text": " Absolutely amazing. This is just one example from the many experiments that were done in the paper."}, {"start": 96.0, "end": 103.76, "text": " So, learning is happening. So much so that over time, a little friend learned so much"}, {"start": 103.76, "end": 107.68, "text": " that when it made a mistake, it could even correct it."}, {"start": 107.68, "end": 114.08000000000001, "text": " Perhaps this means that in a follow-up paper, maybe they can learn to even deal with obstacles"}, {"start": 114.08000000000001, "end": 118.4, "text": " in the way. Now, note that this was just a couple of videos."}, {"start": 118.4, "end": 122.88000000000001, "text": " There are many, many more experiments reported in the paper."}, {"start": 122.88000000000001, "end": 128.56, "text": " At this point, we are still not perfectly sure that learning is really taking place here."}, {"start": 128.56, "end": 133.60000000000002, "text": " So, let's run a multi-fish experiment and assess the results."}, {"start": 133.6, "end": 141.51999999999998, "text": " Let's see\u2026 Yes, as we let them train for longer, all six of our participants show remarkable"}, {"start": 141.51999999999998, "end": 148.56, "text": " improvement in finding the targets. The average amount of time taken is also decreasing rapidly"}, {"start": 148.56, "end": 154.79999999999998, "text": " over time. 
These two seem to be extremely good drivers, perhaps they should be doing this for a"}, {"start": 154.79999999999998, "end": 161.2, "text": " living. And if we sum up the performance of every fish in the experiment, we see that they were"}, {"start": 161.2, "end": 168.32, "text": " not too proficient in the first sessions, but after the training, wow, that is a huge improvement."}, {"start": 169.04, "end": 177.44, "text": " Yes, learning indeed seems to be happening here. So much so that, yes, as a result, I kid you not,"}, {"start": 177.44, "end": 183.6, "text": " they can kind of navigate in the real world too. Now, note that the details of the study were"}, {"start": 183.6, "end": 189.2, "text": " approved by the university and were conducted in accordance with government regulations to make"}, {"start": 189.2, "end": 195.83999999999997, "text": " sure that nobody gets hurt or mistreated in the process. So, this was a lot of fun."}, {"start": 196.32, "end": 203.92, "text": " But what is the insight here? The key insight is that maybe navigation capabilities are universal"}, {"start": 203.92, "end": 210.95999999999998, "text": " across species. We don't know for sure, but if it is true, that is an amazing insight."}, {"start": 210.95999999999998, "end": 217.6, "text": " And who knows, a couple papers down the line, if the self-driving car projects don't come to fruition,"}, {"start": 217.6, "end": 224.24, "text": " maybe we will have fish operated Tesla's instead. What a time to be alive. This episode has been"}, {"start": 224.24, "end": 232.16, "text": " supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU"}, {"start": 232.16, "end": 240.79999999999998, "text": " Cloud. They've recently launched Quadro RTX 6000 RTX 8000 and V100 instances. And hold on to your"}, {"start": 240.8, "end": 248.24, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only"}, {"start": 248.24, "end": 256.32, "text": " cloud service with 48 gigabyte RTX 8000. Join researchers at organizations like Apple, MIT and"}, {"start": 256.32, "end": 262.8, "text": " Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdaleps.com"}, {"start": 262.8, "end": 268.96000000000004, "text": " slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their"}, {"start": 268.96, "end": 274.08, "text": " long-standing support and for helping us make better videos for you. Thanks for watching and"}, {"start": 274.08, "end": 303.91999999999996, "text": " for your generous support. And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dQMvii1KGOY
Wow, Smoke Simulation…Across Space and Time! 💨
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Predicting High-Resolution Turbulence Details in Space and Time" is available here: http://www.geometry.caltech.edu/pubs/BWDL21.pdf 📝 Wavelet Turbulence - one of the best papers ever written (in my opinion): https://www.cs.cornell.edu/~tedkim/WTURB/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today is going to be all about turbulence. We are going to do this, and we are going to do this, and this, and this. To understand what this new paper has to offer, we have to look at an earlier work named Wavelet Turbulence. Apart from the fact that, in my opinion, it is one of the best papers ever written, it could do one thing extremely well. In goes a coarse, low-resolution smoke or fluid simulation, and out comes a proper simulation with a lot more detail. And it is typically accurate enough to fool us. And all this was possible in 2008. It kind of boggles the mind, so much so that this work even won a technical Oscar award. Please remember this accuracy statement, as it will matter a great deal. Once again, it is accurate enough to fool us. Now, more than a decade later, here we are: this new paper is leaps and bounds better and can do five amazing things that the previous methods couldn't do. One, it can do spatio-temporal upsampling, upsampling both in space and in time. What does this mean? It means that in goes a choppy, low-resolution simulation, and out comes a smooth, detailed simulation. Wow, now that is incredible. It is really able to fill in the information not only in space, but in time too. So good. Two, previous methods typically try to take this coarse input simulation and add something to it. But not this new one. No, no. This new method creates a fundamentally new simulation from it. Just look here: it didn't just add a few more details to the input simulation. This is a completely new work. That is quite a difference. Three, I mentioned that Wavelet Turbulence was accurate enough to fool us. So, I wonder how accurate this new method is. Well, that's what we are here for. So, let's see together. Here comes the choppy input simulation. And here is an upsampling technique from a bit more than a year ago. Well, better, yes, but the output is not smooth. I would characterize it more as less choppy. And let's see the new method. Can it do any better? My goodness, look at that. That is a smooth and creamy animation with tons of details. Now, let's pop the question. How does it compare to the reference simulation, reality, if you will? What? I cannot believe my eyes. I can't tell the difference at all. So, this new technique is not just accurate enough to fool the human eye. It is accurate enough to stand up to the real high-resolution simulation. And all this improvement in just one year. What a time to be alive. But wait a second. Does this even make sense? If we have the real reference simulation here, why do we need the upsampling technique? Why not just use the reference? Well, it makes sense. The plan is that we only need to compute the cheap coarse simulation, upsample it quickly, and hope that it is as good as the reference simulation, which takes a great deal longer to compute. Well, okay. But how much longer? Now hold on to your papers and let's see the results. And yes, this is about 5 to 8 times faster than creating the high-resolution simulation we compared to, which is absolutely amazing, especially since the reference was created with a modern, blazing-fast simulator that runs on your graphics card. But wait, remember, I promised you 5 advantages, not 3. So, what are the remaining 2? Well, four, it can perform compression, meaning that the output simulation will take up 600 times less data on our disk. That is insanity. So how much worse is the simulation stored this way? My goodness, look at that. It looks nearly the same as the original one. Wow. And we are still not done yet, not even close. Five, it also works at super high Reynolds numbers. If we have some of the more difficult cases where there is tons of turbulence, it still works really well. This typically gives a lot of trouble to previous techniques. Now, one more important thing: views are, of course, not everything. However, I couldn't help but notice that this work was only seen by 127 people. Yes, I am not kidding, 127 people. This is why I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good, people have to know. Thank you very much for watching this, and let's spread the word together. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models, with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thank you very much for watching and for your generous support, and I'll see you next time.
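To make "upsampling both in space and in time" concrete: think of the simulation as a 4D array of densities indexed by (time, x, y, z), and of upsampling as producing a finer array along all four axes. The sketch below shows only the naive interpolation baseline, the thing the learned method is leaps and bounds better than; the paper's network fills in plausible turbulent detail, whereas plain interpolation merely smooths.

```python
import numpy as np
from scipy.ndimage import zoom

# A coarse, choppy stand-in simulation: 8 frames of a 16^3 density grid.
coarse = np.random.rand(8, 16, 16, 16).astype(np.float32)

# Naive spatio-temporal upsampling: linear interpolation, 4x in time
# and 2x along each spatial axis. This only smooths the data; it
# cannot invent the missing turbulent detail a trained upsampler predicts.
fine = zoom(coarse, zoom=(4, 2, 2, 2), order=1)

print(coarse.shape, "->", fine.shape)  # (8, 16, 16, 16) -> (32, 32, 32, 32)
```

The 600x compression claim follows the same logic in reverse: store only the coarse grid, and regenerate the detail on demand.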
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.76, "end": 8.0, "text": " Today is going to be all about turbulence."}, {"start": 8.0, "end": 16.04, "text": " We are going to do this, and we are going to do this, and this, and this."}, {"start": 16.04, "end": 23.48, "text": " To understand what this new paper has to offer, we have to look at an earlier work named Wavelet Turbulence."}, {"start": 23.48, "end": 31.8, "text": " Apart from the fact that in my opinion it is one of the best papers ever written, it could do one thing extremely well."}, {"start": 31.8, "end": 40.6, "text": " In goes a course, a low resolution smoke or fluid simulation, and outcomes a proper simulation with a lot more details."}, {"start": 40.6, "end": 44.36, "text": " And it is typically accurate enough to fool us."}, {"start": 44.36, "end": 48.2, "text": " And all this was possible in 2008."}, {"start": 48.2, "end": 55.24, "text": " Kind of boggles the mind, so much so that this work even won a technical Oscar award."}, {"start": 55.24, "end": 59.88, "text": " Please remember this accuracy statement as it will matter a great deal."}, {"start": 59.88, "end": 63.56, "text": " Once again, it is accurate enough to fool us."}, {"start": 63.56, "end": 74.36, "text": " Now, more than a decade later, here we are, this new paper is leaps and bounds better and can do five amazing things that the previous methods couldn't do."}, {"start": 74.36, "end": 82.2, "text": " One, it can do spatial temporal upsandpling, upsandpling both in space and in time."}, {"start": 82.2, "end": 83.48, "text": " What does this mean?"}, {"start": 83.48, "end": 91.72, "text": " It means that in goes a choppy, low resolution simulation and outcomes a smooth detailed simulation."}, {"start": 91.72, "end": 94.6, "text": " Wow, now that is incredible."}, {"start": 94.6, "end": 101.56, "text": " It is really able to fill in the information not only in space, but in time too."}, {"start": 101.56, "end": 111.56, "text": " So good. 
Now, two previous methods typically try to take this course input simulation and add something to it."}, {"start": 111.56, "end": 113.56, "text": " But not this new one."}, {"start": 113.56, "end": 114.52000000000001, "text": " No, no."}, {"start": 114.52000000000001, "end": 118.76, "text": " This new method creates a fundamentally new simulation from it."}, {"start": 118.76, "end": 123.80000000000001, "text": " Just look here, it didn't just add a few more details to the input simulation."}, {"start": 123.80000000000001, "end": 126.92, "text": " This is a completely new work."}, {"start": 126.92, "end": 129.08, "text": " That is quite a difference."}, {"start": 129.08, "end": 135.0, "text": " Three, I mentioned that wavelet turbulence was accurate enough to fool us."}, {"start": 135.0, "end": 139.08, "text": " So, I wonder how accurate this new method is."}, {"start": 139.08, "end": 141.08, "text": " Well, that's what we are here for."}, {"start": 141.08, "end": 143.0, "text": " So, let's see together."}, {"start": 143.0, "end": 146.44, "text": " Here comes the choppy input simulation."}, {"start": 146.44, "end": 151.56, "text": " And here is an upsandpling technique from a bit more than a year ago."}, {"start": 151.56, "end": 156.12, "text": " Well, better, yes, but the output is not smooth."}, {"start": 156.12, "end": 159.56, "text": " I would characterize it more as less choppy."}, {"start": 159.56, "end": 161.64000000000001, "text": " And let's see the new method."}, {"start": 161.64000000000001, "end": 164.12, "text": " Can it do any better?"}, {"start": 164.12, "end": 167.16, "text": " My goodness, look at that."}, {"start": 167.16, "end": 172.52, "text": " That is a smooth and creamy animation with tons of details."}, {"start": 172.52, "end": 174.44, "text": " Now, let's pop the question."}, {"start": 174.44, "end": 179.96, "text": " How does it compare to the reference simulation reality, if you will?"}, {"start": 179.96, "end": 180.96, "text": " What?"}, {"start": 180.96, "end": 183.16, "text": " I cannot believe my eyes."}, {"start": 183.16, "end": 185.64000000000001, "text": " I can't tell the difference at all."}, {"start": 185.64, "end": 191.23999999999998, "text": " So, this new technique is not just accurate enough to fool the human eye."}, {"start": 191.23999999999998, "end": 196.83999999999997, "text": " This is accurate enough to stand up to the real high resolution simulation."}, {"start": 196.83999999999997, "end": 200.32, "text": " And all this improvement in just one year."}, {"start": 200.32, "end": 202.39999999999998, "text": " What a time to be alive."}, {"start": 202.39999999999998, "end": 204.39999999999998, "text": " But wait a second."}, {"start": 204.39999999999998, "end": 209.64, "text": " Does this even make sense if we have the real reference simulation here?"}, {"start": 209.64, "end": 212.48, "text": " Why do we need the upsandpling technique?"}, {"start": 212.48, "end": 214.48, "text": " Why not just use the reference?"}, {"start": 214.48, "end": 216.35999999999999, "text": " Well, it makes sense."}, {"start": 216.35999999999999, "end": 223.23999999999998, "text": " The plan is that we only need to compute the cheap core simulation, upsandpling it quickly,"}, {"start": 223.23999999999998, "end": 229.44, "text": " and hope that it is as good as the reference simulation, which takes a great deal longer."}, {"start": 229.44, "end": 230.88, "text": " Well, okay."}, {"start": 230.88, "end": 233.07999999999998, "text": " But how much longer?"}, {"start": 
233.07999999999998, "end": 237.64, "text": " Now hold on to your papers and let's see the results."}, {"start": 237.64, "end": 244.44, "text": " And yes, this is about 5 to 8 times faster than creating the high resolution simulation"}, {"start": 244.44, "end": 251.44, "text": " we compared to, which is absolutely amazing, especially that it was created with a modern,"}, {"start": 251.44, "end": 256.24, "text": " blazing fast reference simulator that runs on your graphics card."}, {"start": 256.24, "end": 260.68, "text": " But wait, remember I promised you 5 advantages."}, {"start": 260.68, "end": 261.68, "text": " Not 3."}, {"start": 261.68, "end": 264.4, "text": " So, what are the remaining 2?"}, {"start": 264.4, "end": 268.0, "text": " Well, 4, it can perform compression."}, {"start": 268.0, "end": 274.52, "text": " Meaning that the output simulation will take up 600 times less data on our disk, that"}, {"start": 274.52, "end": 276.24, "text": " is insanity."}, {"start": 276.24, "end": 280.96, "text": " So how much worse is the simulation stored this way?"}, {"start": 280.96, "end": 283.56, "text": " My goodness, look at that."}, {"start": 283.56, "end": 287.16, "text": " It looks nearly the same as the original one."}, {"start": 287.16, "end": 288.4, "text": " Wow."}, {"start": 288.4, "end": 292.2, "text": " And we are still not done yet, not even close."}, {"start": 292.2, "end": 293.44, "text": " 5."}, {"start": 293.44, "end": 296.96, "text": " It also works on super high Reynolds numbers."}, {"start": 296.96, "end": 302.64, "text": " If we have some of the more difficult cases where there is tons of turbulence, it still"}, {"start": 302.64, "end": 304.44, "text": " works really well."}, {"start": 304.44, "end": 307.56, "text": " This typically gives a lot of trouble to previous techniques."}, {"start": 307.56, "end": 313.28, "text": " Now, one more important thing, views are of course not everything."}, {"start": 313.28, "end": 321.15999999999997, "text": " However, I couldn't not notice that this work was only seen by 127 people."}, {"start": 321.15999999999997, "end": 326.28, "text": " Yes, I am not kidding, 127 people."}, {"start": 326.28, "end": 331.23999999999995, "text": " This is why I am worried that if we don't talk about it here on 2 minute papers, almost"}, {"start": 331.23999999999995, "end": 333.4, "text": " no one will talk about it."}, {"start": 333.4, "end": 337.32, "text": " And these works are so good, people have to know."}, {"start": 337.32, "end": 341.76, "text": " Thank you very much for watching this, and let's spread the word together."}, {"start": 341.76, "end": 347.84, "text": " Perceptilebs is a visual API for TensorFlow carefully designed to make machine learning as"}, {"start": 347.84, "end": 350.03999999999996, "text": " intuitive as possible."}, {"start": 350.03999999999996, "end": 355.32, "text": " This gives you a faster way to build out models with more transparency into how your model"}, {"start": 355.32, "end": 360.08, "text": " is architected, how it performs, and how to debug it."}, {"start": 360.08, "end": 365.92, "text": " And it even generates visualizations for all the model variables and gives you recommendations"}, {"start": 365.92, "end": 371.44, "text": " both during modeling and training and does all this automatically."}, {"start": 371.44, "end": 376.28, "text": " I only wish I had a tool like this when I was working on my neural networks during my"}, {"start": 376.28, "end": 377.88, "text": " PhD years."}, {"start": 377.88, "end": 
384.0, "text": " Visit perceptilebs.com slash papers and start using their system for free today."}, {"start": 384.0, "end": 388.72, "text": " Our thanks to perceptilebs for their support and for helping us make better videos for"}, {"start": 388.72, "end": 389.72, "text": " you."}, {"start": 389.72, "end": 417.16, "text": " Thank you very much for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=b2D_5G_npVI
Next Level Paint Simulations Are Coming! 🎨🖌️
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Practical Pigment Mixing for Digital Painting" is available here: https://scrtwpns.com/mixbox/ Tweet at me if you used this to paint something cool! - https://twitter.com/twominutepapers ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Hmm, as you see, this is not our usual intro. Why is that? Because today, we are going to simulate the process of real painting on a computer, and we'll find out that all previous techniques mixed paint incorrectly, and finally create this beautiful image. Now, there are plenty of previous techniques that help us paint digitally, so why do we need a new paper for this? Well, believe it or not, these previous methods think differently about the shape that the paint has to take; the more sophisticated ones even simulate the diffusion of paint, which is fantastic. They all do these things a little differently, but they agree on one thing. And here comes the problem. The only thing they agree on is that blue plus yellow equals a creamy color. But wait a second, let's actually try this. Many of you already know what is coming. Of course, in real life, blue plus yellow is not a creamy color, it is green. And does the new method know this? Yes, it does. But only this one, and we can try similar experiments over and over again. The results are the same. So, how is this even possible? Does no one know that blue plus yellow equals green? Well, of course they know. But the proper simulation of pigment mixing is very challenging. For instance, it requires keeping track of pigment concentrations, and we also have to simulate subsurface scattering, which is the absorption and scattering of light by these pigments. In some critical applications, just this part can take several hours to days to compute for a challenging case. And now, hold on to your papers, because this new technique can do all this correctly, and in real time. I love this visualization, as it is really dense in information and super easy to read at the same time. That is quite a challenge, and in my opinion, just this one figure could win an award by itself. As you see, most of the time it runs easily at 60 or more frames per second. And even in the craziest cases, it can compute all this about 30 times per second. That is insanity. So, what do we get for all this effort? A more realistic digital painting experience. For instance, with the new method, color mixing now feels absolutely amazing. And if we feel like applying a ton of paint and letting it stain the paper, we get something much more lifelike too. Artists who try this will appreciate these a great deal, I am sure. Especially since the authors also made this paint color mixing technique available for everyone, free of charge. As we noted, computing this kind of paint mixing simulation is not easy. However, using the final technique is, on the other hand, extremely easy. As easy as it gets. If you feel like coding up a simulation and including this method in it, this is all you need to do. Very few paper implementations are this simple to use. It abstracts all the mathematical difficulties away from you. Extra points for elegance, and huge congratulations to the authors. Now, after nearly every Two Minute Papers episode where we showcase such an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And of course, rightfully so, that is a good question. For instance, the GauGAN paper was published in 2019, and here we are, just a bit more than two years later, and it has been transferred into a real product. Some machine learning papers also made it to Tesla's self-driving cars in one or two years, so tech transfer from research to real products is real. But is it real with this technique? Yes, it is. So much so that it is already available in an app named Rebelle 5, which offers a next-level digital painting experience. It even simulates different kinds of papers and how they absorb paint. Hmm, a paper simulation, you say? Yes, here at Two Minute Papers, we appreciate that a great deal. If you use this to paint something, please make sure to leave a comment or tweet at me. I would love to see your scholarly paintings. What a time to be alive. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
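On "this is all you need to do": the authors distribute the technique as the Mixbox library (linked in the description), and using it really is close to a one-liner. Below is a minimal sketch, assuming the pymixbox package and its documented lerp(rgb1, rgb2, t) call; treat the exact package name and API as assumptions and check the linked project page.

```python
# pip install pymixbox   (assumed package name for the authors' library)
import mixbox

blue = (0, 33, 133)     # an RGB blue
yellow = (252, 211, 0)  # an RGB yellow

# Naive RGB averaging gives the "creamy" color the video complains about.
naive = tuple((b + y) // 2 for b, y in zip(blue, yellow))

# Mixbox interpolates in a latent pigment space, so blue plus yellow
# behaves like real paint and comes out green.
mixed = mixbox.lerp(blue, yellow, 0.5)

print("naive RGB mix:", naive)
print("pigment mix:  ", mixed)  # expected: a green, not a cream
```

The point of the design is exactly what the transcript praises: the Kubelka-Munk-style pigment math stays hidden behind a single interpolation call that drops into any existing painting loop.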
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 9.6, "text": " Hmm, as you see, this is not our usual intro."}, {"start": 9.6, "end": 11.120000000000001, "text": " Why is that?"}, {"start": 11.120000000000001, "end": 17.0, "text": " Because today, we are going to simulate the process of real painting on a computer,"}, {"start": 17.0, "end": 22.2, "text": " and we'll find out that all previous techniques mixed paint incorrectly"}, {"start": 22.2, "end": 25.48, "text": " and finally create this beautiful image."}, {"start": 25.48, "end": 29.96, "text": " Now, there are plenty of previous techniques that help us paint digitally,"}, {"start": 29.96, "end": 33.28, "text": " so why do we need a new paper for this?"}, {"start": 33.28, "end": 38.04, "text": " Well, believe it or not, these previous methods think differently"}, {"start": 38.04, "end": 40.92, "text": " about the shape that the paint has to take,"}, {"start": 40.92, "end": 47.08, "text": " the more sophisticated ones even simulate the diffusion of paint, which is fantastic."}, {"start": 47.08, "end": 53.16, "text": " They all do these things a little differently, but they agree on one thing."}, {"start": 53.16, "end": 55.32, "text": " And here comes the problem."}, {"start": 55.32, "end": 61.88, "text": " The only thing they agree on is that blue plus yellow equals a creamy color."}, {"start": 61.88, "end": 66.2, "text": " But wait a second, let's actually try this."}, {"start": 66.2, "end": 69.0, "text": " Many of you already know what is coming."}, {"start": 69.0, "end": 75.96000000000001, "text": " Of course, in real life, blue plus yellow is not a creamy color, it is green."}, {"start": 75.96000000000001, "end": 80.52, "text": " And does the new method know this?"}, {"start": 80.52, "end": 81.88, "text": " Yes, it does."}, {"start": 81.88, "end": 88.11999999999999, "text": " But only this one, and we can try similar experiments over and over again."}, {"start": 88.11999999999999, "end": 90.28, "text": " The results are the same."}, {"start": 90.28, "end": 92.75999999999999, "text": " So, how is this even possible?"}, {"start": 92.75999999999999, "end": 96.36, "text": " Does no one know that blue plus yellow equals green?"}, {"start": 96.36, "end": 98.19999999999999, "text": " Well, of course they know."}, {"start": 98.19999999999999, "end": 102.91999999999999, "text": " But the proper simulation of pigments mixing is very challenging."}, {"start": 102.91999999999999, "end": 107.24, "text": " For instance, it requires keeping track of pigment concentrations"}, {"start": 107.24, "end": 112.19999999999999, "text": " and we also have to simulate subsurface scattering, which is the absorption"}, {"start": 112.19999999999999, "end": 115.24, "text": " and scattering of light of these pigments."}, {"start": 115.24, "end": 119.64, "text": " In some critical applications, just this part can take several hours"}, {"start": 119.64, "end": 122.75999999999999, "text": " to days to compute for a challenging case."}, {"start": 122.75999999999999, "end": 128.12, "text": " And now, hold on to your papers because this new technique can do all this correctly"}, {"start": 128.12, "end": 130.2, "text": " and in real time."}, {"start": 130.2, "end": 134.84, "text": " I love this visualization as it is really dense in information,"}, {"start": 134.84, "end": 137.64000000000001, "text": " super easy to read at the same time."}, {"start": 137.64000000000001, 
"end": 139.4, "text": " That is quite a challenge."}, {"start": 139.4, "end": 144.92000000000002, "text": " And in my opinion, just this one figure could win an award by itself."}, {"start": 144.92000000000002, "end": 151.16, "text": " As you see, most of the time it runs easily with 60 or higher frames per second."}, {"start": 151.16, "end": 157.88, "text": " And even in the craziest cases, it can compute all this about 30 times per second."}, {"start": 157.88, "end": 160.12, "text": " That is insanity."}, {"start": 160.12, "end": 166.20000000000002, "text": " So, what do we get for all this effort, a more realistic digital painting experience?"}, {"start": 166.20000000000002, "end": 171.08, "text": " For instance, with the new method, color mixing now feels absolutely amazing."}, {"start": 171.88, "end": 177.32, "text": " And if we feel like applying a ton of paint and let it stain the paper,"}, {"start": 177.32, "end": 180.36, "text": " we get something much more lifelike too."}, {"start": 180.36, "end": 184.44, "text": " Artists who try this will appreciate these a great deal I am sure."}, {"start": 184.44, "end": 191.07999999999998, "text": " Especially that the authors also made this paint color mixing technique available for everyone"}, {"start": 191.07999999999998, "end": 192.35999999999999, "text": " free of charge."}, {"start": 192.35999999999999, "end": 197.0, "text": " As we noted, computing this kind of paint mixing simulation is not easy."}, {"start": 197.64, "end": 202.76, "text": " However, using the final technique is, on the other hand, extremely easy."}, {"start": 202.76, "end": 204.52, "text": " As easy as it gets."}, {"start": 204.52, "end": 209.4, "text": " If you feel like coding up a simulation and include this method in it,"}, {"start": 209.4, "end": 211.07999999999998, "text": " this is all you need to do."}, {"start": 211.08, "end": 214.76000000000002, "text": " Very few paper implementations are this simple to use."}, {"start": 214.76000000000002, "end": 218.76000000000002, "text": " It can see us all the mathematical difficulties away from you."}, {"start": 218.76000000000002, "end": 223.32000000000002, "text": " Extra points for elegance and huge congratulations to the authors."}, {"start": 223.32000000000002, "end": 229.88000000000002, "text": " Now, after nearly every two minute paper's episode, where we showcase such an amazing paper,"}, {"start": 229.88000000000002, "end": 237.48000000000002, "text": " I get a question saying something like, okay, but when do I get to see or use this in the real world?"}, {"start": 237.48, "end": 241.39999999999998, "text": " And of course, rightfully so, that is a good question."}, {"start": 241.39999999999998, "end": 249.39999999999998, "text": " For instance, this previous Gaugen paper was published in 2019, and here we are just a bit more"}, {"start": 249.39999999999998, "end": 254.2, "text": " than two years later, and it has been transferred into a real product."}, {"start": 254.2, "end": 261.15999999999997, "text": " Some machine learning papers also made it to Tesla's self-driving cars in one or two years,"}, {"start": 261.15999999999997, "end": 265.32, "text": " so tech transfer from research to real products is real."}, {"start": 265.32, "end": 268.28, "text": " But, is it real with this technique?"}, {"start": 268.28, "end": 269.48, "text": " Yes, it is."}, {"start": 269.48, "end": 278.52, "text": " So much so that it is already available in an app named Rebell 5, which offers a next level digital painting 
experience."}, {"start": 278.52, "end": 284.36, "text": " It even simulates different kinds of papers and how they absorb paint."}, {"start": 284.36, "end": 287.4, "text": " Hmm, a paper simulation, you say?"}, {"start": 287.4, "end": 290.2, "text": " Yes, here are two minute papers."}, {"start": 290.2, "end": 291.32, "text": " We appreciate that."}, {"start": 291.32, "end": 292.03999999999996, "text": " A great deal."}, {"start": 292.04, "end": 297.88, "text": " If you use this to paint something, please make sure to leave a comment or tweet at me."}, {"start": 297.88, "end": 300.6, "text": " I would love to see your scholarly paintings."}, {"start": 300.6, "end": 302.28000000000003, "text": " What a time to be alive."}, {"start": 302.28000000000003, "end": 305.56, "text": " This video has been supported by weights and biases."}, {"start": 305.56, "end": 311.72, "text": " They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts"}, {"start": 311.72, "end": 317.08000000000004, "text": " who discuss how they use learning based algorithms to solve real world problems."}, {"start": 317.08, "end": 324.59999999999997, "text": " They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more."}, {"start": 324.59999999999997, "end": 327.64, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 327.64, "end": 335.08, "text": " Make sure to visit them through wnb.me slash gd or just click the link in the video description."}, {"start": 335.08, "end": 338.52, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 338.52, "end": 341.4, "text": " and for helping us make better videos for you."}, {"start": 341.4, "end": 350.52, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uBGY9-GaSdo
Adobe's New Simulation: Bunnies Everywhere! 🐰
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "FrictionalMonolith: A Monolithic Optimization-based Approach for Granular Flow with Contact-Aware Rigid-Body " is available here: https://tetsuya-takahashi.github.io/FrictionalMonolith/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today is going to be all about simulating virtual bunnies. What kinds of bunnies? Bunnies in an hourglass with granular material, bunnies in a mixing drum, bunnies that disappear, and we'll try to fix this one too. So, what was all this footage? Well, this is a follow-up work to the amazing Monolith paper. What is Monolith? It is a technique that helps fix commonly occurring two-way coupling issues in physics simulations. Okay, that sounds great, but what does two-way coupling mean? It means that here, the boxes are allowed to move the smoke, and the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. This previous work simulates this phenomenon properly. It also makes sure that when thrown at the wall, things stick correctly, and a ton of other goodies too. So, this previous method shows a lot of strength. Now, I hear you asking, Károly, can this get even better? And the answer is, yes it can. That's exactly why we are here today. This new paper improves this technique to work better in cases where we have a lot of friction. For instance, it can simulate how some of these tiny bunnies get squeezed through the hourglass and get showered by this sand-like granular material. It can also simulate how some of them remain stuck up there because of the frictional contact. Now, have a look at this. With an earlier technique, we start with one bunny, and we end up with... wait a minute, that is not one bunny's worth of volume. This is what we call the volume dissipation problem. I wonder if we can get our bunny back with the new technique. What do you think? Well, let's see: one bunny goes in, friction happens, and yes, one bunny's worth of volume comes out of the simulation. Then, we put a bunch of them into a mixing drum in the next experiment, where their torment shall continue. This is also a very challenging scene, because we have over 70,000 particles rubbing against each other. And just look at that. The new technique is so robust that there are no issues whatsoever. Loving it. So, what else is all this simulation math good for? Well, for instance, it helps us set up a scene where we get a great deal of artistic freedom. For instance, we can put this glass container with the water between two walls, and look carefully. Yes, we apply a little leftward force to this wall. And since this technique can simulate what is going to happen, we can create an imaginary world where only our creativity is the limit. For instance, we can make a world in which there is just a tiny bit of friction to slow down the fall. Or we can create a world with a ton more friction. And now, there is so much friction going on that the weight of the liquid cannot overcome it anymore, and thus the cube is quickly brought to rest between the walls. Luckily, since our bunny still exists, we can proceed to the next experiment, where we will drop it onto a pile of granular material. This previous method did not do too well in this case, as the bunny sinks down. And if you think that cannot possibly get any worse, well, I have some news for you. It can. How? Look, with a different previous technique, it doesn't even get to sink in, because... ouch! It crashes when it would need to perform these calculations. Now, let's see if the new method can save the day, and... yes, great! It indeed can help our little bunny remain on top of things. So now, let's pop the scholarly question. How long do we have to wait for such a simulation? The hourglass experiment takes about 5 minutes per frame, while the rotating drum experiment takes about half of that, 2.5 minutes. So, it takes a while. Why? Of course, because many of these scenes contain tens of thousands of particles, and almost all of them are in constant frictional interaction with almost all the others at the same time, and the algorithm mustn't miss any of these interactions. All of them have to be simulated. And the fact that through the power of computer graphics research, we can simulate all of these in a reasonable amount of time is an absolute miracle. What a time to be alive! And as always, you know what's coming. Yes, please do not forget to invoke the First Law of Papers, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. Granular materials and frictional contact in a matter of seconds, perhaps… Well, sign me up for this one. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
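The two-way coupling idea in the transcript is easy to demonstrate with a toy system. The sketch below is a hypothetical 1D illustration, not the paper's Monolith solver: a rigid lid rests on a sealed column of gas, the gas pressure acts on the lid, and the lid's motion in turn changes the gas pressure, so the two subsystems exchange momentum at every step.

```python
# Toy 1D two-way coupling (not the Monolith solver): a rigid lid on a
# sealed gas column. Fluid -> solid: gas pressure pushes the lid.
# Solid -> fluid: the lid's height sets the gas volume, hence pressure.
mass, gravity, area = 2.0, 9.81, 0.01  # lid mass (kg), g (m/s^2), area (m^2)
p_ref, h_ref = 101_325.0, 0.5          # gas pressure (Pa) at rest height (m)
damping = 5.0                          # small drag so the oscillation settles
h, v = 0.6, 0.0                        # lid height (m) and velocity (m/s)
dt = 1e-4

for _ in range(50_000):
    p = p_ref * h_ref / h              # isothermal ideal gas: p * h = const
    force = (p - p_ref) * area - mass * gravity - damping * v
    v += dt * force / mass             # semi-implicit Euler update
    h += dt * v

print(f"lid settles near h = {h:.3f} m")  # below h_ref: the lid compresses the gas
```

A monolithic solver like the paper's treats both update rules as one coupled system per step instead of alternating them, which is what keeps contact, friction, and pressure consistent even with tens of thousands of interacting particles.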
[{"start": 0.0, "end": 4.72, "text": " And dear fellow scholars, this is two minute papers with Dr. Karojona Ifehir."}, {"start": 4.72, "end": 9.44, "text": " Today is going to be all about simulating virtual bunnies."}, {"start": 9.44, "end": 16.4, "text": " What kinds of bunnies? Bunnies in an hourglass with granular material, bunnies in a mixing drum,"}, {"start": 16.4, "end": 20.56, "text": " bunnies that disappear and will try to fix this one too."}, {"start": 20.56, "end": 23.12, "text": " So, what was all this footage?"}, {"start": 23.12, "end": 28.16, "text": " Well, this is a follow-up work to the amazing monolith paper."}, {"start": 28.16, "end": 34.96, "text": " What is monolith? It is a technique that helps fixing commonly occurring two-way coupling issues"}, {"start": 34.96, "end": 41.84, "text": " in physics simulations. Okay, that sounds great, but what does two-way coupling mean?"}, {"start": 41.84, "end": 48.32, "text": " It means that here the boxes are allowed to move the smoke and the added two-way coupling"}, {"start": 48.32, "end": 54.32, "text": " part means that now the smoke is also allowed to blow away the boxes."}, {"start": 54.32, "end": 61.52, "text": " This previous work simulates this phenomena properly. It also makes sure that when thrown at the wall,"}, {"start": 61.52, "end": 65.36, "text": " things stick correctly and a ton of other goodies too."}, {"start": 66.32, "end": 70.32, "text": " So, this previous method shows a lot of strength."}, {"start": 70.32, "end": 75.28, "text": " Now, I hear you asking, Karoj, can these get even better?"}, {"start": 75.28, "end": 80.08, "text": " And the answer is, yes it can. That's exactly why we are here today."}, {"start": 80.08, "end": 86.64, "text": " This new paper improves this technique to work better in cases where we have a lot of friction."}, {"start": 87.2, "end": 93.12, "text": " For instance, it can simulate how some of these tiny bunnies get squeezed through the"}, {"start": 93.12, "end": 100.64, "text": " argless and get showered by this sand-like granular material. It can also simulate how some of"}, {"start": 100.64, "end": 104.48, "text": " them remain stuck up there because of the frictional contact."}, {"start": 104.48, "end": 112.32000000000001, "text": " Now, have a look at this. With an earlier technique, we start with one bunny and we end up with,"}, {"start": 112.32000000000001, "end": 118.48, "text": " wait a minute, that volume is not one bunny amount of volume."}, {"start": 118.48, "end": 122.0, "text": " And this is what we call the volume dissipation problem."}, {"start": 122.0, "end": 127.04, "text": " I wonder if we can get our bunny back with the new technique. What do you think?"}, {"start": 127.04, "end": 138.0, "text": " Well, let's see, one bunny goes in, friction happens and yes, one bunny amount of volume comes out"}, {"start": 138.0, "end": 144.08, "text": " of the simulation. Then, we put a bunch of them into a mixing drum in the next experiment"}, {"start": 144.08, "end": 151.92000000000002, "text": " where their tormenting shall continue. This is also a very challenging scene because we have over 70,000"}, {"start": 151.92, "end": 159.28, "text": " particles rubbing against each other. And just look at that. The new technique is so robust"}, {"start": 159.28, "end": 167.35999999999999, "text": " that there are no issues whatsoever. Loving it. 
So, what else is all the simulation math good for?"}, {"start": 167.35999999999999, "end": 173.27999999999997, "text": " Well, for instance, it helps us set up a scene where we get a great deal of artistic freedom."}, {"start": 174.0, "end": 181.35999999999999, "text": " For instance, we can put this glass container with the water between two walls and look carefully."}, {"start": 181.36, "end": 188.0, "text": " Yes, we apply a little left-ward force to this wall. And since this technique can simulate what is"}, {"start": 188.0, "end": 194.32000000000002, "text": " going to happen, we can create an imaginary world where only our creativity is the limit."}, {"start": 194.32000000000002, "end": 200.48000000000002, "text": " For instance, we can make a world in which there is just a tiny bit of friction to slow down the fall."}, {"start": 201.44000000000003, "end": 208.16000000000003, "text": " Or we can create a world with a ton more friction. And now, we have so much friction going on"}, {"start": 208.16, "end": 214.4, "text": " that the weight of the liquid cannot overcome anymore and thus the cube is quickly brought to rest"}, {"start": 214.4, "end": 221.28, "text": " between the walls. Luckily, since our bunny still exists, we can proceed onto the next experiment"}, {"start": 221.28, "end": 227.68, "text": " where we will drop it into a pile of granular material. And this previous method did not do too well"}, {"start": 227.68, "end": 234.24, "text": " in this case as the bunny sinks down. And if you think that cannot possibly get any worse,"}, {"start": 234.24, "end": 241.04000000000002, "text": " well, I have some news for you. It can. How? Look, with a different previous technique,"}, {"start": 241.04000000000002, "end": 248.0, "text": " it doesn't even get to sink in because\u2026 Ouch! It crashes when it would need to perform these"}, {"start": 248.0, "end": 257.2, "text": " calculations. Now, let's see if the new method can save the day and\u2026 Yes, great! It indeed can"}, {"start": 257.2, "end": 264.88, "text": " help our little bunny remain on top of things. So now, let's pop the scholarly question. How much"}, {"start": 264.88, "end": 271.44, "text": " do we have to wait for such a simulation? The hourglass experiment takes about 5 minutes per frame"}, {"start": 271.44, "end": 278.24, "text": " while the rotating drum experiment takes about half of that 2 and a half minutes. So, it takes a while."}, {"start": 279.12, "end": 285.68, "text": " Why? Of course, because many of these scenes contain tens of thousands of particles and almost all"}, {"start": 285.68, "end": 292.08, "text": " of them are in constant, frictional interaction with almost all the others at the same time,"}, {"start": 292.08, "end": 298.40000000000003, "text": " and the algorithm mustn't miss any of these interactions. All of them have to be simulated."}, {"start": 298.40000000000003, "end": 303.68, "text": " And the fact that through the power of computer graphics research, we can simulate all of these"}, {"start": 303.68, "end": 310.88, "text": " in a reasonable amount of time is an absolute miracle. What a time to be alive! And as always,"}, {"start": 310.88, "end": 317.36, "text": " you know what's coming? Yes, please do not forget to invoke the first law of papers which says"}, {"start": 317.36, "end": 323.6, "text": " that research is a process. Do not look at where we are, look at where we will be two more papers"}, {"start": 323.6, "end": 331.36, "text": " down the line. 
Granular materials and frictional contact in a matter of seconds perhaps\u2026 Well,"}, {"start": 331.36, "end": 337.04, "text": " sign me up for this one. This video has been supported by weights and biases. Check out the"}, {"start": 337.04, "end": 342.88, "text": " recent offering fully connected a place where they bring machine learning practitioners together"}, {"start": 342.88, "end": 349.92, "text": " to share and discuss their ideas, learn from industry leaders, and even collaborate on projects"}, {"start": 349.92, "end": 355.68, "text": " together. You see, I get messages from you fellow scholars telling me that you have been inspired"}, {"start": 355.68, "end": 362.96000000000004, "text": " by the series, but don't really know where to start. And here it is. Fully connected is a great"}, {"start": 362.96, "end": 369.2, "text": " way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to"}, {"start": 369.2, "end": 377.03999999999996, "text": " a conference, and more. Make sure to visit them through wnbe.me slash papers or just click the link"}, {"start": 377.03999999999996, "end": 382.79999999999995, "text": " in the video description. Thanks to weights and biases for their longstanding support and for"}, {"start": 382.79999999999995, "end": 387.12, "text": " helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 387.12, "end": 397.12, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=v8n4JEMJrrU
Wow, A Simulation That Matches Reality! 🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Fast and Versatile Fluid-Solid Coupling for Turbulent Flow Simulation" is available here: http://www.geometry.caltech.edu/pubs/LLDL21.pdf ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Credits from the paper: Ansys Inc. 3D meshes were provided by GrabCAD users Aisak (Fig. 1), Mehmet Boztaş (turbine blade in Fig. 20), Vedad Saletovic (turbine tower in Fig. 20), Dhanasekar Vinayagamoorthy (Fig. 22), and CustomWorkx Belgium (Fig. 25), as well as Sketchfab users Cosche (Fig. 4), Opus Poly (Fig. 21), and Lexyc16 (Fig. 27). 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to look at an incredible new paper, where we are going to simulate flows around thin shells, rods, the wind blowing at leaves, air flow through a city, and get this, we will produce spiral vortices around an aircraft and even perform some wind tunnel simulations. Now, many of our previous videos are about real-time fluid simulation methods that are often used in computer games, and others that run for a long time and are typically used in feature-length movies. But not this; this work is not for that. This new paper can simulate difficult coupling effects, which means that it can deal with the movement of air currents around this thin shell. We can also take a hairbrush without bristles and have an interesting simulation. Or even better, add tiny bristles to its geometry and see how much more of a turbulent flow it creates. I already love this paper. So good! Now onwards to more serious applications. So, why simulate thin shells? What is all this useful for? Well, of course, simulating wind turbines. That is an excellent application. We are getting there. Hmm, talking about thin shells. What else? Of course, aircraft. Now, hold on to your papers and marvel at these beautiful spiral vortices that this simulator can create. So, is this all for show, or is this what would really happen in reality? We will take a closer look at that in a moment. With that, yes, the authors claim that this is much more accurate than previous methods in its class. Well done! Let's give it a devilishly difficult test, a real aerodynamic simulation in a wind tunnel. In these cases, getting really accurate results is critical. For instance, here we would like to see, if we were to add a spoiler to this car, how much of an aerodynamic advantage we would get in return. Here are the results from the real wind tunnel test. And now, hold on to your papers and let's see how the new method compares. Wow! Goodness! It is not perfect by any means, but it seems accurate enough that we can see the wake flow of the car clearly enough so that we can make a decision on that spoiler. You know what? Let's keep it. So, we compared the simulation against reality. Now, let's compare a simulation against another simulation. So, against a previous method. Yes, the new one is significantly more accurate than its predecessor. Why? Because the previous one introduces significant boundary layer separation at the top of the car. The new one says that this is what will happen in reality. So, how do we know? Of course, we check. Yes, that is indeed right. Absolutely amazing. And note that this work appeared just about a year ago. So much improvement in so little time. The pace of progress in computer graphics research is out of this world. Okay, so it is accurate. Really accurate. But we already have accurate algorithms. So, how fast is it? Well, proper aerodynamic simulations take from days to weeks to compute, but this airplane simulation took only minutes. How many minutes? 60 minutes, to be exact. Wow! And within these 60 minutes, the same spiral vortices show up as the ones in the real wind tunnel tests. So good. But really, how can this be so fast? How is this even possible? The key to doing all this faster is that the newly proposed method is massively parallel. This means that it can slice up one big computing task into many small ones that can be taken care of independently.
And as a result, it takes advantage of our graphics cards and can squeeze every drop of performance out of them. That is very challenging for an algorithm of this complexity. Of course, the final simulation should still be done with the old tools, just to make sure. However, this new method will be able to help engineers quickly iterate on early ideas and only commit to week-long simulations when absolutely necessary. So, we can go from idea to conclusions in a matter of an hour, not in a matter of a week. This will be an amazing time saver, so the engineers can try more designs. Testing more ideas leads to more knowledge, which leads to better vehicles and better wind turbines. Loving it. And all this improvement in just one year. If I didn't see this with my own eyes, I could not believe it. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold on to your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
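For a feel of what "massively parallel" means here, consider a generic Jacobi-style stencil update, sketched below in Python with NumPy. Every output cell depends only on the previous state of the field, never on freshly written values, so all cells are independent tasks that a GPU can compute at once. This is a toy stand-in, not the paper's actual turbulent-flow scheme; the grid size and the averaging stencil are made up.

```python
import numpy as np

# Illustrative only: a Jacobi-style smoothing step on a 3D grid. Each output
# cell is an average of its 6 neighbors in the *previous* field, so every
# cell can be computed independently -- the property a GPU solver exploits.

def jacobi_step(u):
    v = u.copy()
    v[1:-1, 1:-1, 1:-1] = (
        u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
        u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
        u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]
    ) / 6.0
    return v

u = np.random.rand(64, 64, 64)   # toy scalar field on a grid
for _ in range(10):
    u = jacobi_step(u)           # all cells updated "at once"
```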
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 7.68, "text": " Today we are going to look at an incredible new paper,"}, {"start": 7.68, "end": 13.36, "text": " where we are going to simulate flows around thin shells, rods,"}, {"start": 13.36, "end": 18.240000000000002, "text": " the wind blowing at leaves, air flow through a city, and get this,"}, {"start": 18.240000000000002, "end": 24.88, "text": " we will produce spiral vortices around the aircraft and even perform some wind tunnel simulations."}, {"start": 24.88, "end": 29.6, "text": " Now, many of our previous videos are about real-time fluid simulation methods"}, {"start": 29.6, "end": 35.36, "text": " that are often used in computer games and others that run for a long time"}, {"start": 35.36, "end": 38.64, "text": " and are typically used in feature-length movies."}, {"start": 38.64, "end": 41.52, "text": " But not this, this work is not for that."}, {"start": 41.52, "end": 44.800000000000004, "text": " This new paper can simulate difficult coupling effects,"}, {"start": 44.800000000000004, "end": 50.16, "text": " which means that it can deal with the movement of air currents around this thin shell."}, {"start": 50.16, "end": 55.52, "text": " We can also take a hairbrush without bristles and have an interesting simulation."}, {"start": 55.52, "end": 64.08, "text": " Or even better, add tiny bristles to its geometry and see how much more of a turbulent flow it creates."}, {"start": 64.08, "end": 67.76, "text": " I already love this paper. So good!"}, {"start": 67.76, "end": 70.80000000000001, "text": " Now onwards to more serious applications."}, {"start": 70.80000000000001, "end": 76.08, "text": " So, why simulate thin shells? What is all this useful for?"}, {"start": 76.08, "end": 79.36, "text": " Well, of course, simulating wind turbines."}, {"start": 79.36, "end": 83.28, "text": " That is an excellent application. We are getting there."}, {"start": 83.28, "end": 89.68, "text": " Hmm, talking about thin shells. What else? Of course, aircraft."}, {"start": 89.68, "end": 96.88, "text": " Now, hold on to your papers and marvel at these beautiful spiral vortices that this simulator can create."}, {"start": 96.88, "end": 103.28, "text": " So, is this all for show or is this what would really happen in reality?"}, {"start": 103.28, "end": 106.08, "text": " We will take a closer look at that in a moment."}, {"start": 106.08, "end": 113.12, "text": " With that, yes, the authors claim that this is much more accurate than previous methods in its"}, {"start": 113.12, "end": 120.16000000000001, "text": " class. Well done! Let's give it a devilishly difficult test, a real aerodynamic simulation"}, {"start": 120.16000000000001, "end": 126.4, "text": " in a wind tunnel. In these cases, getting really accurate results is critical."}, {"start": 126.4, "end": 132.08, "text": " For instance, here we would like to see that if we were to add a spoiler to this car,"}, {"start": 132.08, "end": 136.48000000000002, "text": " how much of an aerodynamic advantage we would get in return."}, {"start": 136.48000000000002, "end": 139.36, "text": " Here are the results from the real wind tunnel test."}, {"start": 139.36, "end": 145.36, "text": " And now, hold on to your papers and let's see how the new method compares."}, {"start": 145.36, "end": 153.68, "text": " Wow! Goodness! 
It is not perfect by any means, but seems accurate enough that we can see the"}, {"start": 153.68, "end": 158.88000000000002, "text": " weak flow of the car clearly enough so that we can make a decision on that spoiler."}, {"start": 158.88000000000002, "end": 165.68, "text": " You know what? Let's keep it. So, we compared the simulation against reality."}, {"start": 165.68, "end": 172.88, "text": " Now, let's compare a simulation against another simulation. So, against a previous method."}, {"start": 172.88, "end": 178.08, "text": " Yes, the new one is significantly more accurate and then its predecessor."}, {"start": 178.08, "end": 186.0, "text": " Why? Because the previous one introduces significant boundary layers separations at the top of the"}, {"start": 186.0, "end": 192.0, "text": " car. The new one says that this is what will happen in reality. So, how do we know?"}, {"start": 192.0, "end": 198.32, "text": " Of course, we check. Yes, that is indeed right. Absolutely amazing."}, {"start": 198.32, "end": 205.52, "text": " And note that this work appeared just about a year ago. So much improvement in so little time."}, {"start": 205.52, "end": 209.68, "text": " The pace of progress in computer graphics research is out of this world."}, {"start": 209.68, "end": 217.28, "text": " Okay, so it is accurate. Really accurate. But we already have accurate algorithms."}, {"start": 217.28, "end": 224.8, "text": " So, how fast is it? Well, the proper aerodynamic simulations take from days to weeks to compute,"}, {"start": 224.8, "end": 232.0, "text": " but this airplane simulation took only minutes. How many minutes? 60 minutes to be exact."}, {"start": 232.8, "end": 239.04, "text": " Wow! And within these 60 minutes, the same spiral vertices show up as the ones in the real"}, {"start": 239.04, "end": 247.28, "text": " wind tunnel tests. So good. But really, how can this be so fast? How is this even possible? The key"}, {"start": 247.28, "end": 253.68, "text": " to doing all this faster is that the new proposed method is massively parallel. This means that it"}, {"start": 253.68, "end": 260.88, "text": " can slice up one big computing task into many small ones that can be taken care of independently."}, {"start": 260.88, "end": 267.52, "text": " And as a result, it takes advantage of our graphics cards and can squeeze every drop of performance"}, {"start": 267.52, "end": 273.2, "text": " out of it. That is very challenging for an algorithm of this complexity. Of course,"}, {"start": 273.2, "end": 279.2, "text": " the final simulation should still be done with the old tools just to make sure. However,"}, {"start": 279.2, "end": 285.52, "text": " this new method will be able to help engineers quickly iterate on early ideas and only commit"}, {"start": 285.52, "end": 292.15999999999997, "text": " to weak long simulations when absolutely necessary. So, we can go from idea to conclusions in a"}, {"start": 292.16, "end": 299.20000000000005, "text": " matter of an hour, not in a matter of a week. This will be an amazing time saver so the engineers"}, {"start": 299.20000000000005, "end": 306.24, "text": " can try more designs. Testing more ideas leads to more knowledge that leads to better vehicles,"}, {"start": 306.24, "end": 313.52000000000004, "text": " better wind turbines. Loving it. And all this improvement in just one year. If I didn't see this"}, {"start": 313.52000000000004, "end": 319.6, "text": " with my own eyes, I could not believe this. What a time to be alive! 
This episode has been"}, {"start": 319.6, "end": 327.52000000000004, "text": " supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU"}, {"start": 327.52000000000004, "end": 336.16, "text": " Cloud. They've recently launched Quadro RTX 6000 RTX 8000 and V100 instances. And hold on to your"}, {"start": 336.16, "end": 343.6, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only"}, {"start": 343.6, "end": 352.32000000000005, "text": " Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech"}, {"start": 352.32000000000005, "end": 358.16, "text": " in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdaleps.com"}, {"start": 358.16, "end": 364.08000000000004, "text": " slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for"}, {"start": 364.08000000000004, "end": 369.20000000000005, "text": " their long-standing support and for helping us make better videos for you. Thanks for watching"}, {"start": 369.2, "end": 376.8, "text": " and for your generous support. And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=0ISa3uubuac
Opening The First AI Hair Salon! 💇
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "SketchHairSalon: Deep Sketch-based Hair Image Synthesis" is available here: https://chufengxiao.github.io/SketchHairSalon/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to open an AI-powered hair salon. How cool is that? But how? Well, we recently looked at this related technique from NVIDIA that is able to make rough drawings come to life as beautiful photorealistic images. In goes a rough drawing and out comes an image of this quality. And it processes changes to the drawings with the speed of thought. You can even request a bunch of variations on the same theme and get them right away. So here is a crazy idea. How about doing this with hairstyles? Just draw them, and the AI puts it onto the model and makes it look photorealistic. Well, that sounds like science fiction, so I will believe it when I see it. And this new method claims to be able to do exactly that, so let's see the pros first. This will go in and this will come out. How is that even possible? Well, all we need to do is once again produce a crude drawing that is just enough to convey our intentions. We can even control the color, the length of the hair locks, or even request a braid, and all of these work pretty well. Another great aspect of this technique is that it works very rapidly, so we can easily iterate over these hairstyles. And if we don't feel like our original idea will be the one, we can refine it for as long as we please. This AI tool works with the speed of thought. One of the best aspects of this work. You can see an example here. This does not look right yet. But this quick iteration also means that we can lengthen these braids in just a moment. And yes, better. So how does it perform compared to previous techniques? Well, let's have a look together. This is the input hairstyle and the matte telling the AI where we seek to place the hair. And let's see the previous methods. The first two techniques are very rough. This follow-up work is significantly better than the previous ones, but it is still well behind the true photograph for reference. This is much more realistic. So hold on to your papers and let's see the new technique. Now you're probably thinking, Károly, what is going on? Why is the video not changing? Is this an error? Well, I have to tell you something. It's not an error. What you are looking at now is not the reference photo. No, no. This is the result of the new technique. The true reference photo is this one. That is a wonderful result. The new method understands not only the geometry of the hair better, but also how the hair should play with the illumination of the scene too. I am a light transport researcher by trade, and this makes me very, very happy. So much so that in some ways this technique looks more realistic than the true image. That is super cool. What a time to be alive. Now, not even this technique is perfect. Even though the new method understands geometry better than its predecessors, we have to be careful with our edits because geometry problems still emerge. Look here. Also, the other issue is that the resolution of the generated hair should match the resolution of the underlying image. I feel that these are usually a bit more pixelated. Now note that these kinds of issues are what typically get improved from one paper to the next. So, you know the deal. A couple of papers down the line, and I am sure these edits will become nearly indistinguishable from reality. As you see, there are still issues. The results are often not even close to perfect, but the pace of progress in AI research is nothing short of amazing.
With this, we can kind of open an AI-powered hair salon, and in some cases it will work, but soon it might be that nearly all of our drawings will result in photorealistic images that are indistinguishable from reality. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me slash paperforum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
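For the curious, the conditioning idea behind such a sketch-to-hair system can be written down in a few lines. Below is a deliberately tiny PyTorch stand-in, not the paper's actual SketchHairSalon networks: the user's strokes and the hair matte are simply concatenated as input channels, and an encoder-decoder maps them to an RGB image. All layer sizes and tensor shapes are invented for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of a sketch-conditioned image generator:
# input = 3-channel stroke drawing + 1-channel hair matte, output = RGB.
# Purely illustrative layer sizes; the real system is far more elaborate.

class TinySketch2Hair(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sketch, matte):
        x = torch.cat([sketch, matte], dim=1)  # condition on both inputs
        return self.net(x)

model = TinySketch2Hair()
sketch = torch.randn(1, 3, 256, 256)   # stand-in for the user's strokes
matte = torch.randn(1, 1, 256, 256)    # stand-in for where the hair goes
image = model(sketch, matte)           # (1, 3, 256, 256), values in [-1, 1]
```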
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 8.4, "text": " Today we are going to open an AI-powered hair salon."}, {"start": 8.4, "end": 10.4, "text": " How cool is that?"}, {"start": 10.4, "end": 12.200000000000001, "text": " But how?"}, {"start": 12.200000000000001, "end": 15.8, "text": " Well, we recently looked at this related technique from Nvidia"}, {"start": 15.8, "end": 21.8, "text": " that is able to make a rough drawings come to life as beautiful photorealistic images."}, {"start": 21.8, "end": 26.6, "text": " In goes a rough drawing and out comes an image of this quality."}, {"start": 26.6, "end": 31.0, "text": " And it processes changes to the drawings with the speed of thought."}, {"start": 31.0, "end": 37.2, "text": " You can even request a bunch of variations on the same theme and get them right away."}, {"start": 37.2, "end": 40.2, "text": " So here is a crazy idea."}, {"start": 40.2, "end": 43.2, "text": " How about doing this with hairstyles?"}, {"start": 43.2, "end": 49.8, "text": " Just draw them and the AI puts it onto the model and makes it look photorealistic."}, {"start": 49.8, "end": 54.8, "text": " Well, that sounds like science fiction, so I will believe it when I see it."}, {"start": 54.8, "end": 59.0, "text": " And this no method claims to be able to do exactly that,"}, {"start": 59.0, "end": 61.8, "text": " so let's see the pros first."}, {"start": 61.8, "end": 65.0, "text": " This will go in and this will come out."}, {"start": 65.0, "end": 67.2, "text": " How is that even possible?"}, {"start": 67.2, "end": 75.0, "text": " Well, all we need to do is once again produce a crude drawing that is just enough to convey our intentions."}, {"start": 75.0, "end": 79.2, "text": " We can even control the color, the length of the hair locks,"}, {"start": 79.2, "end": 84.0, "text": " or even request a braid and all of these work pretty well."}, {"start": 84.0, "end": 88.2, "text": " Another great aspect of this technique is that it works very rapidly"}, {"start": 88.2, "end": 91.0, "text": " so we can easily iterate over these hairstyles."}, {"start": 91.0, "end": 94.8, "text": " And if we don't feel like our original idea will be the one,"}, {"start": 94.8, "end": 97.8, "text": " we can refine it for as long as we please."}, {"start": 97.8, "end": 101.6, "text": " This AI tool works with the speed of thought."}, {"start": 101.6, "end": 103.8, "text": " One of the best aspects of this work."}, {"start": 103.8, "end": 105.4, "text": " You can see an example here."}, {"start": 105.4, "end": 108.0, "text": " This does not look right yet."}, {"start": 108.0, "end": 113.8, "text": " But this quick iteration also means that we can lengthen these braids in just a moment."}, {"start": 113.8, "end": 116.39999999999999, "text": " And yes, better."}, {"start": 116.39999999999999, "end": 119.8, "text": " So how does it perform compared to previous techniques?"}, {"start": 119.8, "end": 121.8, "text": " Well, let's have a look together."}, {"start": 121.8, "end": 127.6, "text": " This is the input hairstyle and the mat telling the AI where we seek to place the hair."}, {"start": 127.6, "end": 130.0, "text": " And let's see the previous methods."}, {"start": 130.0, "end": 133.6, "text": " The first two techniques are very rough."}, {"start": 133.6, "end": 138.2, "text": " This follow-up work is significantly better than the previous ones,"}, {"start": 138.2, "end": 142.6, "text": " but it 
is still well behind the true photograph for reference."}, {"start": 142.6, "end": 145.2, "text": " This is much more realistic."}, {"start": 145.2, "end": 150.2, "text": " So hold on to your papers and let's see the new technique."}, {"start": 150.2, "end": 154.0, "text": " Now you're probably thinking, Karoi, what is going on?"}, {"start": 154.0, "end": 156.4, "text": " Why is the video not changing?"}, {"start": 156.4, "end": 157.79999999999998, "text": " Is this an error?"}, {"start": 157.79999999999998, "end": 160.0, "text": " Well, I have to tell you something."}, {"start": 160.0, "end": 161.2, "text": " It's not an error."}, {"start": 161.2, "end": 164.4, "text": " What you are looking at now is not the reference photo."}, {"start": 164.4, "end": 165.2, "text": " No, no."}, {"start": 165.2, "end": 168.2, "text": " This is the result of the new technique."}, {"start": 168.2, "end": 171.0, "text": " The true reference photo is this one."}, {"start": 171.0, "end": 173.2, "text": " That is a wonderful result."}, {"start": 173.2, "end": 177.8, "text": " The new method understands not only the geometry of the hair better,"}, {"start": 177.8, "end": 182.8, "text": " but also how the hair should play with the illumination of the scene too."}, {"start": 182.8, "end": 188.0, "text": " I am a light transport researcher by trade, and this makes me very, very happy."}, {"start": 188.0, "end": 195.0, "text": " So much so that in some ways this technique looks more realistic than the true image."}, {"start": 195.0, "end": 197.0, "text": " That is super cool."}, {"start": 197.0, "end": 198.8, "text": " What a time to be alive."}, {"start": 198.8, "end": 204.4, "text": " Now, not even this technique is perfect, even though the new method understands geometry"}, {"start": 204.4, "end": 208.4, "text": " better than its predecessors, we have to be careful with our edits"}, {"start": 208.4, "end": 211.4, "text": " because geometry problems still emerge."}, {"start": 211.4, "end": 212.60000000000002, "text": " Look here."}, {"start": 212.60000000000002, "end": 218.60000000000002, "text": " Also, the other issue is that the resolution of the generated hair should match the resolution"}, {"start": 218.60000000000002, "end": 220.4, "text": " of the underlying image."}, {"start": 220.4, "end": 224.0, "text": " I feel that these are usually a bit more pixelated."}, {"start": 224.0, "end": 229.8, "text": " Now note that these kinds of issues are what typically get improved from one paper to the next."}, {"start": 229.8, "end": 231.8, "text": " So, you know the deal."}, {"start": 231.8, "end": 238.6, "text": " A couple of papers down the line, and I am sure these edits will become nearly indistinguishable from reality."}, {"start": 238.6, "end": 240.8, "text": " As you see, there are still issues."}, {"start": 240.8, "end": 243.8, "text": " The results are often not even close to perfect,"}, {"start": 243.8, "end": 248.4, "text": " but the pace of progress in AI research is nothing short of amazing."}, {"start": 248.4, "end": 255.4, "text": " With this we can kind of open an AI-powered hair salon, and some cases will work,"}, {"start": 255.4, "end": 261.8, "text": " but soon it might be that nearly all of our drawings were result in photorealistic results"}, {"start": 261.8, "end": 264.6, "text": " that are indistinguishable from reality."}, {"start": 264.6, "end": 268.4, "text": " This video has been supported by weights and biases."}, {"start": 268.4, "end": 269.4, "text": " Look at this."}, {"start": 269.4, 
"end": 276.2, "text": " They have a great community forum that aims to make you the best machine learning engineer you can be."}, {"start": 276.2, "end": 283.0, "text": " You see, I always get messages from you fellow scholars telling me that you have been inspired by the series,"}, {"start": 283.0, "end": 286.0, "text": " but don't really know where to start."}, {"start": 286.0, "end": 287.59999999999997, "text": " And here it is."}, {"start": 287.59999999999997, "end": 294.2, "text": " In this forum, you can share your projects, ask for advice, look for collaborators, and more."}, {"start": 294.2, "end": 303.4, "text": " Make sure to visit www.me-slash-paper-forum and say hi, or just click the link in the video description."}, {"start": 303.4, "end": 310.0, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better videos for you."}, {"start": 310.0, "end": 339.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=14tNq-fqTmQ
NCsoft’s New AI: The Ultimate Stuntman! 🏋
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "Learning Time-Critical Responses for Interactive Character Control" is available here: https://mrl.snu.ac.kr/research/ProjectAgile/Agile.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a bunch of completely unorganized human motion data, give it to an AI, grab a controller, and get an amazing video game out of it, which we can play almost immediately. Why almost immediately? I'll tell you in a moment. So, how does this process work? There are similar previous techniques that took a big soup of motion capture data, and outdid each other on what they could learn from it. And they did it really well. For instance, one of these AIs was able to not only learn from these movements, but even improve them, and even better, adapt them to different kinds of terrains. This other work used a small training set of general movements to reinvent a popular high-jump technique, the Fosbury flop, by itself. This allows the athlete to jump backward over the bar, thus lowering their center of gravity. And it could also do it on Mars. So cool. In the meantime, Ubisoft also learned to not only simulate, but even predict the future motion of video game characters, thus speeding up this process. You can see here how well its predictions line up with the real reference footage. And it could also stand its ground when previous methods failed. Ouch. So, are we done here? Is there nothing else to do? Well, of course, there is. Here, in goes this soup of unorganized motion data, which is used to train a deep neural network, and then this happens. Yes, the AI learned how to weave together these motions so well that we can even grab a controller and start playing immediately. Or almost immediately. And with that, here comes the problem. Do you see it? There is still a bit of a delay between the button press and the motion. A simple way of alleviating that would be to speed up the underlying animations. Well, that's not going to be it, because this way the motions lose their realism. Not good. So, can we do something about this? Well, hold onto your papers and have a look at how this new method addresses it. We press the button and yes, now that is what I call quick and fluid motion. Yes, this new AI promises to learn time-critical responses to our commands. What does that mean? Well, see the white bar here. By this amount of time, the motions have to finish, and the blue is where we are currently. So, the time-critical part means that it promises that the blue bar will never exceed the white bar. This is wonderful. Just ask the gamers: has it happened to you that you already pressed the button, but the action didn't execute in time? They will say that it happens all the time. But with this, we can perform a series of slow U-turns and then progressively decrease the amount of time that we give to the character and see how much more agile it becomes. Absolutely amazing. The motions really changed, and I find all of them quite realistic. Maybe this could be connected to a game mechanic where the super quick time-critical actions deplete the stamina of our character quicker, so we have to use them sparingly. But that's not all it can do, not even close. You can even chain many of these crazy actions together and, as you see, our character does these amazing motions that look like they came straight out of the Witcher series. Loving it. What you see here is a teacher network that learns to efficiently pull off these moves, and then we fire up a student neural network that seeks to become as proficient as the teacher, but with a smaller and more compact neural network.
This is what we call policy distillation. So I hear you asking, is the student as good as its teacher? Let's have one more look at the teacher and the student; they are very close. Actually, wait a second, did you see it? The student is actually even more responsive than its teacher was. This example showcases it more clearly. It can even complete this slalom course, and we might even be able to make a parkour game with it. And here comes the best part. The required training data is not a few days, not even a few hours, only a few minutes. And the training time is in the order of just a few hours, and this we only have to do once; then we are free to use the trained neural network for as long as we please. Now, one more really cool tidbit in this work is that this training data needn't be realistic; it can be a soup of highly stylized motions, and the new technique can still weave them together really well. Is it possible that, yes, my goodness, it is possible. The input motions don't even have to come from humans, they can come from quadrupeds too. It took only five minutes of motion capture data, and the horse AI became able to transition between these movement types. This is yet another amazing tool in democratizing character animation. Absolutely amazing. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their Tables feature, and the best part about it is that it is not only able to handle pretty much any kind of data you can throw at it, but it also presents your experiments to you in a way that is easy to understand. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
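The teacher-student idea mentioned above, policy distillation, is easy to sketch. Below is a hypothetical PyTorch version: a large, frozen teacher policy produces action distributions, and a much smaller student is trained to match them, with the remaining time budget fed in as one extra state feature to echo the time-critical conditioning. The dimensions, the random stand-in data, and the extra budget feature are all illustrative assumptions, not NCsoft's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical policy distillation sketch: a frozen teacher's action
# distribution is the target for a smaller, faster student network.

STATE_DIM, ACTION_DIM = 64, 16

teacher = nn.Sequential(nn.Linear(STATE_DIM + 1, 512), nn.ReLU(),
                        nn.Linear(512, 512), nn.ReLU(),
                        nn.Linear(512, ACTION_DIM))
student = nn.Sequential(nn.Linear(STATE_DIM + 1, 64), nn.ReLU(),
                        nn.Linear(64, ACTION_DIM))   # much more compact

teacher.eval()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
kl = nn.KLDivLoss(reduction="batchmean")

for step in range(1000):
    state = torch.randn(32, STATE_DIM)   # stand-in for the character state
    budget = torch.rand(32, 1)           # remaining time for the action
    x = torch.cat([state, budget], dim=1)
    with torch.no_grad():
        target = teacher(x).softmax(dim=1)   # teacher's action distribution
    loss = kl(student(x).log_softmax(dim=1), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```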
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 11.44, "text": " Today, we are going to take a bunch of completely unorganized human motion data, give it to an AI,"}, {"start": 11.44, "end": 18.72, "text": " grab a controller, and get an amazing video game out of it, which we can play almost immediately."}, {"start": 18.72, "end": 22.48, "text": " Why almost immediately? I'll tell you in a moment."}, {"start": 22.48, "end": 29.92, "text": " So, how does this process work? There are similar previous techniques that took a big soup of motion capture data,"}, {"start": 29.92, "end": 33.92, "text": " and outbade each other on what they could learn from it."}, {"start": 33.92, "end": 40.72, "text": " And they did it really well. For instance, one of these AI's was able to not only learn from these movements,"}, {"start": 40.72, "end": 47.28, "text": " but even improve them, and even better adapt them to different kinds of terrains."}, {"start": 47.28, "end": 53.760000000000005, "text": " This other work used a small training set of general movements to reinvent a popular high-jump technique,"}, {"start": 53.760000000000005, "end": 60.08, "text": " the Fosbury flop by itself. This allows the athlete to jump backward over the bar,"}, {"start": 60.08, "end": 62.400000000000006, "text": " thus lowering their center of gravity."}, {"start": 63.760000000000005, "end": 66.0, "text": " And it could also do it on Mars."}, {"start": 67.84, "end": 72.96000000000001, "text": " So cool. In the meantime, Ubisoft also learned to not only simulate,"}, {"start": 72.96, "end": 79.52, "text": " but even predict the future motion of video game characters, thus speeding up this process."}, {"start": 79.52, "end": 84.55999999999999, "text": " You can see here how well its predictions line up with the real reference footage."}, {"start": 86.39999999999999, "end": 90.16, "text": " And it could also stand its ground when previous methods failed."}, {"start": 91.28, "end": 97.75999999999999, "text": " Ouch. So, are we done here? Is there nothing else to do? Well, of course, there is."}, {"start": 97.76, "end": 105.04, "text": " Here, in goes this soup of unorganized motion data, which is given to train a deep neural network,"}, {"start": 105.04, "end": 112.56, "text": " and then this happens. Yes, the AI learned how to weave together these motions so well that we"}, {"start": 112.56, "end": 120.16000000000001, "text": " can even grab a controller and start playing immediately. Or almost immediately. And with that,"}, {"start": 120.16000000000001, "end": 122.72, "text": " here comes the problem. Do you see it?"}, {"start": 122.72, "end": 128.64, "text": " There is still a bit of a delay between the button press and the motion."}, {"start": 129.2, "end": 134.48, "text": " A simple way of alleviating that would be to speed up the underlying animations."}, {"start": 135.28, "end": 140.16, "text": " Well, that's not going to be it because this way the motions lose their realism."}, {"start": 140.88, "end": 148.0, "text": " Not good. So, can we do something about this? Well, hold onto your papers and have a look"}, {"start": 148.0, "end": 155.84, "text": " at how this new method addresses it. We press the button and yes, now that is what I call quick"}, {"start": 155.84, "end": 163.2, "text": " and fluid motion. 
Yes, this new AI promises to learn time critical responses to our commands."}, {"start": 163.68, "end": 169.76, "text": " What does that mean? Well, see the white bar here. By this amount of time, the motions have to"}, {"start": 169.76, "end": 176.08, "text": " finish and the blue is where we are currently. So, the time critical part means that it promises"}, {"start": 176.08, "end": 182.48000000000002, "text": " that the blue bar will never exceed the white bar. This is wonderful, just as the gamer"}, {"start": 182.48000000000002, "end": 188.8, "text": " does. Has it happened to you that you already pressed the button, but the action didn't execute"}, {"start": 188.8, "end": 195.68, "text": " in time. They will say that it happens all the time. But with this, we can perform a series"}, {"start": 195.68, "end": 202.0, "text": " of slow U turns and then progressively decrease the amount of time that we give to the character"}, {"start": 202.0, "end": 209.44, "text": " and see how much more agile it becomes. Absolutely amazing. The motions really changed and I find"}, {"start": 209.44, "end": 215.44, "text": " all of them quite realistic. Maybe this could be connected to a game mechanic where the super"}, {"start": 215.44, "end": 221.84, "text": " quick time critical actions deplete the stamina of our character quicker, so we have to use them"}, {"start": 221.84, "end": 228.88, "text": " sparingly. But that's not all it can do, not even close. You can even chain many of these crazy"}, {"start": 228.88, "end": 235.04, "text": " actions together and as you see our character does these amazing motions that look like they"}, {"start": 235.04, "end": 241.35999999999999, "text": " came straight out of the Witcher series. Loving it. What you see here is a teacher network"}, {"start": 241.35999999999999, "end": 248.24, "text": " that learns to efficiently pull off these moves and then we fire up a student neural network that"}, {"start": 248.24, "end": 254.96, "text": " seeks to become as proficient as the teacher, but with a smaller and more compact neural network."}, {"start": 254.96, "end": 262.32, "text": " This is what we call policy distillation. So I hear you asking, is the student as good as its teacher?"}, {"start": 263.04, "end": 271.12, "text": " Let's have one more look at the teacher and the student, they are very close. Actually, wait a"}, {"start": 271.12, "end": 277.92, "text": " second, did you see it? The student is actually even more responsive than his teacher was. This"}, {"start": 277.92, "end": 285.6, "text": " example showcases it more clearly. It can even complete this slalom course and we might even be able"}, {"start": 285.6, "end": 292.0, "text": " to make a parkour game with it. And here comes the best part. The required training data is not"}, {"start": 292.0, "end": 299.12, "text": " a few days, not even a few hours, only a few minutes. And the training time is in the order of"}, {"start": 299.12, "end": 305.36, "text": " just a few hours and this we only have to do once and then we are free to use the train neural"}, {"start": 305.36, "end": 312.0, "text": " network for as long as we please. Now, one more really cool tidbit in this work is that this training"}, {"start": 312.0, "end": 318.8, "text": " data mustn't be realistic, it can be a soup of highly stylized motions and the new technique"}, {"start": 318.8, "end": 328.56, "text": " can steal weave them together really well. 
Is it possible that, yes, my goodness, it is possible."}, {"start": 328.56, "end": 334.16, "text": " The input motions don't even have to come from humans, they can come from quadrupeds too."}, {"start": 334.16, "end": 340.8, "text": " This talk only five minutes of motion capture data and the horse AI became able to transition"}, {"start": 340.8, "end": 348.08000000000004, "text": " between these movement types. This is yet another amazing tool in democratizing character animation."}, {"start": 348.08000000000004, "end": 354.96000000000004, "text": " Absolutely amazing. What a time to be alive. Wates and biases provide tools to track your experiments"}, {"start": 354.96000000000004, "end": 360.72, "text": " in your deep learning projects. What you see here is their tables feature and the best part about"}, {"start": 360.72, "end": 366.96000000000004, "text": " it is that it is not only able to handle pretty much any kind of data you can throw at it,"}, {"start": 366.96000000000004, "end": 373.20000000000005, "text": " but it also presents your experiments to you in a way that is easy to understand."}, {"start": 373.20000000000005, "end": 379.92, "text": " It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more."}, {"start": 379.92, "end": 386.96000000000004, "text": " And the best part is that Wates and Biasis is free for all individuals, academics and open source"}, {"start": 386.96, "end": 394.32, "text": " projects. Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 394.32, "end": 400.32, "text": " description and you can get a free demo today. Our thanks to Wates and Biasis for their long"}, {"start": 400.32, "end": 405.84, "text": " standing support and for helping us make better videos for you. Thanks for watching and for your"}, {"start": 405.84, "end": 418.79999999999995, "text": " generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Wbid5rvCGos
NVIDIA’s New AI Draws Images With The Speed of Thought! ⚡
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers Online demo - http://gaugan.org/gaugan2/ NVIDIA Canvas - https://www.nvidia.com/en-us/studio/canvas/ 📝 The previous paper "Semantic Image Synthesis with Spatially-Adaptive Normalization" is available here: https://nvlabs.github.io/SPADE/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #gaugan #gaugan2
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Welcome to Episode 600. And today you will see your own drawings come to life as beautiful photorealistic images. And it turns out you can try it too. I'll tell you about it in a moment. This technique is called GauGAN 2, and yes, this is really happening. In goes a rough drawing and out comes an image of this quality. That is incredible. But here there is something that is even more incredible. What is it? Well, drawing is an iterative process. But once we are committed to an idea, we need to refine it over and over, which takes quite a bit of time. And let's be honest here, sometimes things come out differently than we may have imagined. But this, this is different. Here you can change things as quickly as you can think of the change. You can even request a bunch of variations on the same theme and get them right away. But that's not all, not even close. Get this, with this you can draw even without drawing. Yes, really. How is that even possible? Well, if we don't feel like drawing, instead we can just type what we wish to see. And my goodness, it not only generates these images according to the written description, but this description can get pretty elaborate. For instance, we can get ocean waves. That's great. But now, let's add some rocks. And a beach too. And there we go. We can also use an image as a starting point, then just delete the undesirable parts and have them inpainted by the algorithm. Now, okay, this is nothing new. Computer graphics researchers have been able to do this for more than 10 years now. But hold on to your papers, because they couldn't do this. Yes, we can fill in these gaps with a written description. Couldn't witness the Northern Lights in person? No worries, here you go. And wait a second, did you see that? There are two really cool things to see here. Thing number one, it even redraws the reflections on the water, even if we haven't highlighted that part for inpainting. We don't need to say anything, and it will update the whole environment to reflect the new changes by itself. That is amazing. Now, I am a light transport researcher by trade, and this makes me very, very happy. Thing number two, I don't know if you caught this, but this is so fast, it doesn't even wait for your full request, it updates after every single keystroke. Drawing is an inherently iterative process, and iterating with this is an absolute breeze. Not will be a breeze, it is a breeze. Now, after nearly every Two Minute Papers episode where we showcase an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And rightfully so, that is a good question. The previous GauGAN paper was published in 2019, and here we are, just a bit more than two years later, and it has been transferred into a real product. Not only that, but the resolution has improved a great deal, about four times what it was before, plus the new version also supports more materials. And we are at the point where this is finally not just a cool demo, but a tool that is useful for real artists. What a time to be alive. Now I noted earlier, I did not say that iterating with this will be a breeze, but that it is a breeze. Why? Well, great news, because you can try it right now in two different ways. One, it is now part of a desktop application called NVIDIA Canvas. With this, you can even export the layers to Photoshop and continue your work there.
This will require a relatively recent NVIDIA graphics card. And two, there is a web app too that you can try right now. The link is available in the video description, and if you try it, please scroll down and make sure to read the instructions and watch the tutorial video to not get lost. And remember, all this tech transfer from paper to product took place in a matter of two years. Bravo, NVIDIA! The pace of progress in AI research is absolutely amazing. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
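The inpainting step described above boils down to one compositing equation: keep the original pixels outside the erased region and take the model's pixels inside it. Here is a hedged NumPy sketch; the generate function is a dummy placeholder, not NVIDIA's API, and the prompt string and mask shape are made up.

```python
import numpy as np

# Mask-based inpainting in one line of math: outside the mask, keep the
# original image; inside it, use whatever the generator produced.

def generate(image, mask, prompt):
    # Dummy stand-in for the real text-conditioned image model.
    return np.zeros_like(image)

def inpaint(image, mask, prompt):
    filled = generate(image, mask, prompt)
    return mask * filled + (1.0 - mask) * image  # composite

image = np.random.rand(256, 256, 3)                          # photo to edit
mask = np.zeros((256, 256, 1)); mask[64:128, 64:192] = 1.0   # erased region
result = inpaint(image, mask, "aurora borealis over the water")
```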
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 7.6000000000000005, "text": " Welcome to Episode 600."}, {"start": 7.6000000000000005, "end": 14.6, "text": " And today you will see your own drawings come to life as beautiful photorealistic images."}, {"start": 14.6, "end": 17.6, "text": " And it turns out you can try it too."}, {"start": 17.6, "end": 19.6, "text": " I'll tell you about it in a moment."}, {"start": 19.6, "end": 25.6, "text": " This technique is called Gauguin II, and yes, this is really happening."}, {"start": 25.6, "end": 30.400000000000002, "text": " In goes a rough drawing and out comes an image of this quality."}, {"start": 30.400000000000002, "end": 32.2, "text": " That is incredible."}, {"start": 32.2, "end": 35.800000000000004, "text": " But here there is something that is even more incredible."}, {"start": 35.800000000000004, "end": 36.800000000000004, "text": " What is it?"}, {"start": 36.800000000000004, "end": 40.2, "text": " Well, drawing is an iterative process."}, {"start": 40.2, "end": 45.2, "text": " But once we are committed to an idea, we need to refine it over and over,"}, {"start": 45.2, "end": 47.2, "text": " which takes quite a bit of time."}, {"start": 47.2, "end": 54.0, "text": " And let's be honest here, sometimes things come out differently than we may have imagined."}, {"start": 54.0, "end": 56.8, "text": " But this, this is different."}, {"start": 56.8, "end": 61.6, "text": " Here you can change things as quickly as you can think of the change."}, {"start": 61.6, "end": 68.2, "text": " You can even request a bunch of variations on the same theme and get them right away."}, {"start": 68.2, "end": 70.8, "text": " But that's not all, not even close."}, {"start": 70.8, "end": 75.4, "text": " Get this, with this you can draw even without drawing."}, {"start": 75.4, "end": 77.0, "text": " Yes, really."}, {"start": 77.0, "end": 79.2, "text": " How is that even possible?"}, {"start": 79.2, "end": 84.8, "text": " Well, if we don't feel like drawing, instead we can just type what we wish to see."}, {"start": 84.8, "end": 91.4, "text": " And my goodness, it not only generates these images according to the written description,"}, {"start": 91.4, "end": 94.8, "text": " but this description can get pretty elaborate."}, {"start": 94.8, "end": 97.4, "text": " For instance, we can get ocean waves."}, {"start": 97.4, "end": 98.4, "text": " That's great."}, {"start": 98.4, "end": 102.0, "text": " But now, let's add some rocks."}, {"start": 102.0, "end": 103.80000000000001, "text": " And a beach too."}, {"start": 103.80000000000001, "end": 106.0, "text": " And there we go."}, {"start": 106.0, "end": 112.72, "text": " We can also use an image as a starting point, then just delete the undesirable parts and"}, {"start": 112.72, "end": 115.8, "text": " have it impainted by the algorithm."}, {"start": 115.8, "end": 118.6, "text": " Now, okay, this is nothing new."}, {"start": 118.6, "end": 123.2, "text": " Computer graphics researchers were able to do this for more than 10 years now."}, {"start": 123.2, "end": 127.6, "text": " But hold on to your papers because they couldn't do this."}, {"start": 127.6, "end": 131.8, "text": " Yes, we can fill in these gaps with a written description."}, {"start": 131.8, "end": 135.0, "text": " Couldn't witness the Northern Lights in person."}, {"start": 135.0, "end": 137.72, "text": " No worries, here you go."}, 
{"start": 137.72, "end": 140.72, "text": " And wait a second, did you see that?"}, {"start": 140.72, "end": 143.8, "text": " There are two really cool things to see here."}, {"start": 143.8, "end": 150.28, "text": " Thing number one, it even redraws the reflections on the water, even if we haven't highlighted"}, {"start": 150.28, "end": 152.0, "text": " that part for impainting."}, {"start": 152.0, "end": 157.64, "text": " We don't need to say anything and it will update the whole environment to reflect the"}, {"start": 157.64, "end": 160.32, "text": " new changes by itself."}, {"start": 160.32, "end": 161.84, "text": " That is amazing."}, {"start": 161.84, "end": 168.20000000000002, "text": " Now, I am a light transport researcher by trade and this makes me very, very happy."}, {"start": 168.20000000000002, "end": 174.08, "text": " Thing number two, I don't know if you call this, but this is so fast, it doesn't even"}, {"start": 174.08, "end": 179.8, "text": " wait for your full request, it updates after every single keystroke."}, {"start": 179.8, "end": 186.0, "text": " Drawing is an inherently iterative process and iterating with this is an absolute breeze."}, {"start": 186.0, "end": 189.24, "text": " Not will be a breeze, it is a breeze."}, {"start": 189.24, "end": 194.64000000000001, "text": " Now, after nearly every two minute paper episode where we showcase an amazing paper, I get"}, {"start": 194.64000000000001, "end": 201.20000000000002, "text": " a question saying something like, okay, but when do I get to see or use this in the real"}, {"start": 201.20000000000002, "end": 202.36, "text": " world?"}, {"start": 202.36, "end": 205.48000000000002, "text": " And rightfully so, that is a good question."}, {"start": 205.48000000000002, "end": 211.92000000000002, "text": " The previous Gauguan paper was published in 2019 and here we are, just a bit more than"}, {"start": 211.92000000000002, "end": 216.52, "text": " two years later and it has been transferred into a real product."}, {"start": 216.52, "end": 222.36, "text": " Not only that, but the resolution has improved a great deal, about four times of what it was"}, {"start": 222.36, "end": 227.16000000000003, "text": " before, plus the new version also supports more materials."}, {"start": 227.16000000000003, "end": 233.24, "text": " And we are at the point where this is finally not just a cool demo, but a tool that is useful"}, {"start": 233.24, "end": 235.12, "text": " for real artists."}, {"start": 235.12, "end": 236.88, "text": " What a time to be alive."}, {"start": 236.88, "end": 244.0, "text": " Now I noted earlier, I did not say that iterating with this will be a breeze, but it is a breeze."}, {"start": 244.0, "end": 245.0, "text": " Why?"}, {"start": 245.0, "end": 251.16, "text": " Well, great news because you can try it right now in two different ways."}, {"start": 251.16, "end": 255.76, "text": " One, it is not part of a desktop application called Nvidia Canvas."}, {"start": 255.76, "end": 261.32, "text": " With this, you can even export the layers to Photoshop and continue your work there."}, {"start": 261.32, "end": 265.44, "text": " This will require a relatively recent Nvidia graphics card."}, {"start": 265.44, "end": 269.56, "text": " And two, there is a web app too that you can try right now."}, {"start": 269.56, "end": 275.28000000000003, "text": " The link is available in the video description and if you try it, please scroll down and"}, {"start": 275.28000000000003, "end": 280.52, "text": " make sure to read the 
instructions and watch the tutorial video to not get lost."}, {"start": 280.52, "end": 286.72, "text": " And remember, all this tech transfer from paper to product took place in a matter of two"}, {"start": 286.72, "end": 287.72, "text": " years."}, {"start": 287.72, "end": 289.16, "text": " Bravo Nvidia!"}, {"start": 289.16, "end": 293.4, "text": " The pace of progress in AI research is absolutely amazing."}, {"start": 293.4, "end": 296.84000000000003, "text": " This episode has been supported by CoHear AI."}, {"start": 296.84, "end": 302.67999999999995, "text": " CoHear builds large language models and makes them available through an API so businesses"}, {"start": 302.67999999999995, "end": 309.44, "text": " can add advanced language understanding to their system or app quickly with just one line"}, {"start": 309.44, "end": 310.67999999999995, "text": " of code."}, {"start": 310.67999999999995, "end": 316.4, "text": " You can use your own data, whether it's text from customer service requests, legal contracts,"}, {"start": 316.4, "end": 324.52, "text": " or social media posts to create your own custom models to understand text or even generated."}, {"start": 324.52, "end": 329.68, "text": " For instance, it can be used to automatically determine whether your messages are about"}, {"start": 329.68, "end": 333.96, "text": " your business hours, returns, or shipping."}, {"start": 333.96, "end": 340.71999999999997, "text": " Or it can be used to generate a list of possible sentences you can use for your product descriptions."}, {"start": 340.71999999999997, "end": 346.64, "text": " Make sure to go to CoHear.ai slash papers or click the link in the video description"}, {"start": 346.64, "end": 348.96, "text": " and give it a try today."}, {"start": 348.96, "end": 350.4, "text": " It's super easy to use."}, {"start": 350.4, "end": 354.28, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mY2ozPHn0w4
New Weather Simulator: Almost Perfect! 🌤
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Weatherscapes: Nowcasting Heat Transfer and Water Continuity" is available here: http://computationalsciences.org/publications/amador-herrera-2021-weatherscapes.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You can immediately start holding onto your papers, because today we are going to take a journey around the globe and simulate the weather over the mountainous coast of Yucatán, a beautiful day in the Swiss Alps, and a searing day in the Sahara Desert. Yes, from now on, all of this is possible through the power of this new paper. Just to showcase how quickly things are improving in the wondrous land of computer graphics, first we were able to simulate a tree. And then researchers learned to simulate an entire forest, but that's not all. Then, more recent papers explored simulating an entire ecosystem for thousands of years. Now that's something. So, are we done here? What else is there to do? Well, this new work finally helps us simulate weatherscapes, and it can go from a light rain to a huge snowstorm. Now, previous methods were also able to do something like this, but something was missing. For instance, this looks great, but the buoyancy part is just not there. Why? Well, let's look at the new method and find out together. Yes, the new work simulates the microphysics of water, which helps model phase changes and buoyancy properly. You are seeing the buoyancy part here. That makes a great difference. So, what about phase changes? Well, look at this simulation. This is also missing something. What is missing? Well, the simulation of the föhn effect. What this means is a phenomenon where condensed warm air approaches one side of the mountain and something happens. Or, to be more exact, something doesn't happen. In reality, this should not get through. So, let's see the new simulation. Oh yes, it is indeed stuck. Now, that's what I call a proper simulation of heat transfer and phase changes. Bravo! And here, you also see another example of a proper phase change simulation, where the rainwater flows down the mountain to the ground, and the new system simulates how it evaporates. Wait for it. Yes, it starts to form new clouds. So good. And it can simulate many more amazing natural phenomena beyond these, in a way that resembles reality quite well. My favorites here are the mammatus clouds, a phenomenon where the air descends and warms, creating differences in temperature in the cloud. And this is what the instability it creates looks like. And, oh my, the hole punch cloud. This can emerge due to air cooling rapidly, often due to a passing aircraft. All of these phenomena require simulating the microphysics of water. So, microphysics, huh? That sounds computationally expensive. So, let's pop the question. How long do we have to wait for a simulation like this? Whoa! Are you seeing what I am seeing? How is that even possible? This runs easily, interactively, with approximately 10 frames per second. It simulates this and this. And all of these variables, and it can do all of this 10 times every second. That is insanity. Now, one more thing. Views are not everything. Not even close. However, I couldn't ignore the fact that as of the making of this video, only approximately 500 people watched the original paper video. I think this is such a beautiful work. Everyone has to see it. Before Two Minute Papers, I was just running around at the university with these papers in my hand, trying to show them to the people I know. And today, this is why Two Minute Papers exists, and it can only exist with such an amazing and receptive audience as you are.
So, thank you so much for coming with us on this glorious journey, and make sure to subscribe and hit the bell icon if you wish to see more outstanding works like this. We have a great deal more coming up for you. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models, with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
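For a feel of what "microphysics" means here, below is a toy single-parcel sketch of the two effects the episode highlights: condensation (a phase change that releases latent heat) and buoyancy driven by the resulting temperature excess. The constants and the saturation curve are made up for illustration; the actual paper couples these processes to a full 3D fluid simulation.

```python
import numpy as np

# Toy constants, in arbitrary units; real values come from atmospheric physics.
LATENT_HEAT = 2.5      # warming per unit of condensed vapor
BUOYANCY_COEFF = 0.1   # vertical acceleration per degree of excess warmth
DT = 0.1               # time step

def saturation(temp: float) -> float:
    # Warmer air holds more vapor; a crude stand-in for the real curve.
    return 0.01 * np.exp(temp / 15.0)

def step(temp, vapor, cloud, velocity, ambient_temp):
    excess = max(vapor - saturation(temp), 0.0)  # supersaturated vapor
    vapor -= excess                  # phase change: vapor -> cloud droplets
    cloud += excess
    temp += LATENT_HEAT * excess     # condensation releases latent heat
    velocity += BUOYANCY_COEFF * (temp - ambient_temp) * DT  # warm air rises
    return temp, vapor, cloud, velocity

t, v, c, w = 20.0, 0.05, 0.0, 0.0
for _ in range(100):
    t, v, c, w = step(t, v, c, w, ambient_temp=15.0)
print(f"cloud water: {c:.4f}, updraft speed: {w:.2f}")
```

Even in this toy version you can see the feedback loop the episode shows with the evaporating rainwater: condensation warms the parcel, the warm parcel rises, and the rising moisture forms new cloud.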
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fahir."}, {"start": 4.6000000000000005, "end": 11.84, "text": " You can immediately start holding onto your papers, because today we are going to take a journey around the globe"}, {"start": 11.84, "end": 18.72, "text": " and simulate the weather over the mountainous coast of Yucatan, a beautiful day in the Swiss Alps,"}, {"start": 18.72, "end": 21.92, "text": " and a searing day in the Sahara Desert."}, {"start": 21.92, "end": 27.36, "text": " Yes, from now on, all of this is possible through the power of this new paper."}, {"start": 27.36, "end": 33.44, "text": " Just to showcase how quickly things are improving in the wondrous land of computer graphics,"}, {"start": 33.44, "end": 37.28, "text": " first we were able to simulate a tree."}, {"start": 37.28, "end": 43.6, "text": " And then researchers learned to simulate an entire forest, but that's not all."}, {"start": 43.6, "end": 51.120000000000005, "text": " Then, Morrison papers explore simulating an entire ecosystem for thousands of years."}, {"start": 51.120000000000005, "end": 52.72, "text": " Now that's something."}, {"start": 52.72, "end": 56.0, "text": " So, are we done here? What else is there to do?"}, {"start": 56.0, "end": 60.88, "text": " Well, this new work finally helps us simulating waterscapes,"}, {"start": 60.88, "end": 65.6, "text": " and it can go from a light rain to a huge snowstorm."}, {"start": 65.6, "end": 72.96000000000001, "text": " Now, previous methods were also able to do something like this, but something was missing."}, {"start": 72.96000000000001, "end": 78.72, "text": " For instance, this looks great, but the buoyancy part is just not there."}, {"start": 78.72, "end": 79.68, "text": " Why?"}, {"start": 79.68, "end": 84.56, "text": " Well, let's look at the new method and find out together."}, {"start": 84.56, "end": 88.48, "text": " Yes, the new work simulates the microphysics of water,"}, {"start": 88.48, "end": 93.44, "text": " which helps modeling phase changes and buoyancy properly."}, {"start": 93.44, "end": 96.08, "text": " You are seeing the buoyancy part here."}, {"start": 96.08, "end": 98.4, "text": " That makes a great difference."}, {"start": 98.4, "end": 100.80000000000001, "text": " So, what about phase changes?"}, {"start": 100.80000000000001, "end": 103.2, "text": " Well, look at this simulation."}, {"start": 103.2, "end": 105.84, "text": " This is also missing something."}, {"start": 105.84, "end": 107.2, "text": " What is missing?"}, {"start": 107.2, "end": 110.16, "text": " Well, the simulation of the phone effect."}, {"start": 110.16, "end": 116.39999999999999, "text": " What this means is a phenomenon where condensed warm air approaches one side of the mountain"}, {"start": 116.39999999999999, "end": 118.56, "text": " and something happens."}, {"start": 118.56, "end": 122.0, "text": " Or, to be more exact, something doesn't happen."}, {"start": 122.0, "end": 125.2, "text": " In reality, this should not get through."}, {"start": 125.2, "end": 127.28, "text": " So, let's see the new simulation."}, {"start": 128.24, "end": 131.2, "text": " Oh, yes, it is indeed stuck."}, {"start": 131.2, "end": 136.64, "text": " Now, that's what I call a proper simulation of heat transfer and phase changes."}, {"start": 136.64, "end": 142.39999999999998, "text": " Bravo! 
And here, you also see another example of a proper phase change simulation"}, {"start": 142.39999999999998, "end": 149.6, "text": " where the rainwater flows down the mountain to the ground and the new system simulates how it evaporates."}, {"start": 150.23999999999998, "end": 150.88, "text": " Wait for it."}, {"start": 152.16, "end": 155.04, "text": " Yes, it starts to form new clouds."}, {"start": 155.76, "end": 156.39999999999998, "text": " So good."}, {"start": 157.2, "end": 163.04, "text": " And it can simulate many more amazing natural phenomena beyond these in a way that"}, {"start": 163.04, "end": 165.35999999999999, "text": " resembles reality quite well."}, {"start": 165.36, "end": 171.44000000000003, "text": " My favorites here are the Mammothas clouds, a phenomenon where the air descends and warms,"}, {"start": 171.44000000000003, "end": 174.4, "text": " creating differences in temperature in the cloud."}, {"start": 174.4, "end": 177.92000000000002, "text": " And this is what the instability it creates looks like."}, {"start": 178.56, "end": 181.44000000000003, "text": " And, oh my, the whole punch."}, {"start": 181.44000000000003, "end": 187.20000000000002, "text": " This can emerge due to air cooling rapidly, often due to a passing aircraft."}, {"start": 187.20000000000002, "end": 192.08, "text": " All of these phenomena require the simulations of the microphysics of water."}, {"start": 192.08, "end": 197.20000000000002, "text": " So, microphysics, huh? That sounds computationally expensive."}, {"start": 197.20000000000002, "end": 198.96, "text": " So, let's pop the question."}, {"start": 198.96, "end": 202.48000000000002, "text": " How long do we have to wait for a simulation like this?"}, {"start": 202.48000000000002, "end": 206.24, "text": " Whoa! 
Are you seeing what I am seeing?"}, {"start": 206.24, "end": 208.48000000000002, "text": " How is that even possible?"}, {"start": 208.48000000000002, "end": 214.16000000000003, "text": " This runs easily, interactively, with approximately 10 frames per second."}, {"start": 214.16000000000003, "end": 217.44, "text": " It simulates this and this."}, {"start": 217.44, "end": 224.32, "text": " And all of these variables, and it can do all these 10 times every second."}, {"start": 224.32, "end": 226.32, "text": " That is insanity."}, {"start": 226.32, "end": 227.92, "text": " Sinvia."}, {"start": 227.92, "end": 229.52, "text": " Now, one more thing."}, {"start": 229.52, "end": 231.6, "text": " Views are not everything."}, {"start": 231.6, "end": 233.12, "text": " Not even close."}, {"start": 233.12, "end": 237.44, "text": " However, I couldn't ignore the fact that as of the making of this video,"}, {"start": 237.44, "end": 242.64, "text": " only approximately 500 people watched the original paper video."}, {"start": 242.64, "end": 245.44, "text": " I think this is such a beautiful work."}, {"start": 245.44, "end": 247.04, "text": " Everyone has to see it."}, {"start": 247.04, "end": 251.2, "text": " Before two-minute papers, I was just running around at the university"}, {"start": 251.2, "end": 255.12, "text": " with these papers in my hand trying to show them to the people I know."}, {"start": 255.12, "end": 258.56, "text": " And today, this is why two-minute papers exist,"}, {"start": 258.56, "end": 264.08, "text": " and it can only exist with such an amazing and receptive audience as you are."}, {"start": 264.08, "end": 268.4, "text": " So, thank you so much for coming with us on this glorious journey,"}, {"start": 268.4, "end": 271.2, "text": " and make sure to subscribe and hit the bell icon."}, {"start": 271.2, "end": 274.32, "text": " If you wish to see more outstanding works like this,"}, {"start": 274.32, "end": 276.8, "text": " we have a great deal more coming up for you."}, {"start": 276.8, "end": 280.0, "text": " Percepti Labs is a visual API for TensorFlow"}, {"start": 280.0, "end": 284.88, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 284.88, "end": 287.84000000000003, "text": " This gives you a faster way to build out models"}, {"start": 287.84000000000003, "end": 291.68, "text": " with more transparency into how your model is architected,"}, {"start": 291.68, "end": 294.8, "text": " how it performs, and how to debug it."}, {"start": 294.8, "end": 299.2, "text": " And it even generates visualizations for all the model variables,"}, {"start": 299.2, "end": 303.68, "text": " and gives you recommendations both during modeling and training,"}, {"start": 303.68, "end": 306.48, "text": " and does all this automatically."}, {"start": 306.48, "end": 310.8, "text": " I only wish I had a tool like this when I was working on my neural networks"}, {"start": 310.8, "end": 312.64000000000004, "text": " during my PhD years."}, {"start": 312.64000000000004, "end": 315.84000000000003, "text": " Visit perceptilabs.com, slash papers,"}, {"start": 315.84000000000003, "end": 318.96000000000004, "text": " and start using their system for free today."}, {"start": 318.96000000000004, "end": 321.52000000000004, "text": " Our thanks to perceptilabs for their support,"}, {"start": 321.52000000000004, "end": 324.24, "text": " and for helping us make better videos for you."}, {"start": 324.24, "end": 326.48, "text": " Thanks for watching, and for your generous 
support,"}, {"start": 326.48, "end": 336.48, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MCq0x01Jmi0
New AI: Next Level Video Editing! 🤯
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Layered Neural Atlases for Consistent Video Editing" is available here: https://layered-neural-atlases.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to spice up this family video with a new stylized environment. And then we make a video of a boat trip much more memorable, up the authenticity of a parkour video, decorate a swan, enhance our clothing a little, and conveniently forget about a speeding motorcycle. Only for scientific purposes, of course. So, these are all very challenging tasks to perform, and of course none of these should be possible. And this is a new AI-based solution that can pull off all of these. But how? Well, in goes an input video, and this AI decomposes it into a foreground and a background, but in a way that it understands that this is just a 2D video that represents a 3D world. Clearly humans understand this, but does Adobe's new AI do it too? And I wonder how much it understands about that. Well, let's give it a try together. Put those flowers on the dress and... What? The flowers look like they are really there, as they wrinkle as the dress wrinkles, and they catch shadows just as the dress catches shadows too. That is absolutely incredible. It supports these high-frequency movements so well that we can even stylize kite-sailing videos with it, where there are tons of tiny water droplets flying about. No problems at all. We can also draw on this dog, and remember, we mentioned that it understands the difference between foreground and background, and look. The scribble correctly travels behind objects. Aha, and this is also the reason why we can easily remove a speeding motorbike from this security footage. Just cut out the foreground layer. Nothing to see here. But I wonder, can we go a little more extreme here? And it turns out these are really nothing compared to what this new AI can pull off. Look, we can not only decorate this swan, but here is the key. And, oh yes, the swan is fine, yes, but the reflection of the swan is also computed correctly. Now, this feels like black magic, and we are not even done yet. Now, hold on to your papers, because here come my two favorite examples. Example number one, biking in Wonderland. I love this. Now, you see here that not even this technique is perfect. If you look behind the spokes of the wheel, you see a fair amount of warping. I still think it is a miracle that this can be pulled off at all. Example number two, a picturesque trip. Here not only the background has been changed, but even the water has changed, as chunks of ice have also been added. And with that, I wonder how easy it is to do this. I'll tell you in a moment, after we look at this. Yes, there is also one issue here with the warping of the water in the wake of the boat, but this is an excellent point for us to invoke the First Law of Papers, which says, do not look at where we are, look at where we will be, two more papers down the line. So, how much work do we have to put in to pull all this off? A day of work, maybe? Nope, not even close. If you have been holding onto your paper so far, now squeeze that paper and look at this. We can unwrap the background of the footage and stylize it as we please. All this takes is just editing one image. This goes in, this comes out. Wow, absolutely anyone can do this. So, this is a huge step in democratizing video stylization and bringing it to everyone. What a time to be alive. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data.
But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
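The "edit one image, stylize the whole video" trick above comes from the layered-atlas idea: every frame stores per-pixel coordinates into a shared foreground atlas and a shared background atlas, so recompositing the frames from an edited atlas propagates the edit to the entire video. Here is a minimal sketch under that assumption, with random arrays standing in for the learned mappings and a nearest-neighbor lookup instead of the paper's continuous mapping network.

```python
import numpy as np

def sample_atlas(atlas: np.ndarray, uv: np.ndarray) -> np.ndarray:
    # Nearest-neighbor lookup of per-pixel atlas coordinates; the actual
    # method uses a learned, continuous mapping instead.
    h, w, _ = atlas.shape
    ys = np.clip((uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    xs = np.clip((uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    return atlas[ys, xs]

def render(bg_atlas, fg_atlas, bg_uv, fg_uv, alpha):
    # Recomposite every frame from the (possibly edited) atlases.
    frames = []
    for t in range(bg_uv.shape[0]):
        bg = sample_atlas(bg_atlas, bg_uv[t])
        fg = sample_atlas(fg_atlas, fg_uv[t])
        a = alpha[t][..., None]            # per-pixel foreground opacity
        frames.append(a * fg + (1.0 - a) * bg)
    return frames

T, H, W = 4, 64, 64                        # a tiny toy video
bg_atlas = np.random.rand(512, 512, 3)
fg_atlas = np.random.rand(512, 512, 3)
bg_uv = np.random.rand(T, H, W, 2)         # stand-ins for learned mappings
fg_uv = np.random.rand(T, H, W, 2)
alpha = np.random.rand(T, H, W)

# One edit on the single background atlas (a blue tint)...
bg_atlas[..., 2] = np.minimum(bg_atlas[..., 2] * 1.5, 1.0)
# ...and every rendered frame picks up the change automatically.
frames = render(bg_atlas, fg_atlas, bg_uv, fg_uv, alpha)
```

This also explains why the scribble on the dog travels behind objects: the edit lives in the foreground atlas, and the per-frame alpha decides where the background covers it.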
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 10.64, "text": " Today we are going to spice up this family video with a new stylized environment."}, {"start": 11.34, "end": 15.72, "text": " And then we make a video of a boat trip much more memorable,"}, {"start": 16.42, "end": 19.42, "text": " up the authenticity of a parkour video,"}, {"start": 20.42, "end": 22.02, "text": " decorate a swan,"}, {"start": 23.86, "end": 26.16, "text": " enhance our clothing a little,"}, {"start": 26.16, "end": 31.04, "text": " and conveniently forget about a speeding motorcycle."}, {"start": 31.54, "end": 33.78, "text": " Only for scientific purposes, of course."}, {"start": 34.1, "end": 37.74, "text": " So, these are all very challenging tasks to perform,"}, {"start": 37.74, "end": 40.58, "text": " and of course none of these should be possible."}, {"start": 40.94, "end": 45.22, "text": " And this is a new AIB solution that can pull off all of these."}, {"start": 45.72, "end": 47.019999999999996, "text": " But how?"}, {"start": 47.519999999999996, "end": 53.96, "text": " Well, in goes an input video and this AI decomposes it into a foreground and background,"}, {"start": 53.96, "end": 61.54, "text": " but in a way that it understands that this is just a 2D video that represents a 3D world."}, {"start": 62.04, "end": 67.24000000000001, "text": " Clearly humans understand this, but does Adobe's new AI do it too?"}, {"start": 67.74000000000001, "end": 71.08, "text": " And I wonder how much it understands about that."}, {"start": 71.72, "end": 73.72, "text": " Well, let's give it a try together."}, {"start": 73.96000000000001, "end": 76.24000000000001, "text": " Put those flowers on the address and..."}, {"start": 77.08, "end": 77.58, "text": " What?"}, {"start": 78.12, "end": 82.66, "text": " The flowers look like they are really there as they wrinkle, as the dress wrinkles,"}, {"start": 82.66, "end": 87.25999999999999, "text": " and it catches shadows just as the dress catches shadows too."}, {"start": 87.56, "end": 89.75999999999999, "text": " That is absolutely incredible."}, {"start": 90.1, "end": 95.28, "text": " It supports these high frequency movements so well that we can even stylize"}, {"start": 95.28, "end": 100.72, "text": " archite-sealing videos with it where there is tons of tiny water droplets flying about."}, {"start": 101.06, "end": 102.3, "text": " No problems at all."}, {"start": 102.3, "end": 111.19999999999999, "text": " We can also draw on this dog and remember we mentioned that it understands the difference between foreground and background and look."}, {"start": 111.2, "end": 114.8, "text": " The scribble correctly travels behind objects."}, {"start": 115.5, "end": 122.9, "text": " Aha, and this is also the reason why we can easily remove a speeding motorbike from this security footage."}, {"start": 123.16, "end": 125.16, "text": " Just cut out the foreground layer."}, {"start": 125.76, "end": 131.08, "text": " Nothing to see here, but I wonder can we go a little more extreme here?"}, {"start": 131.08, "end": 137.02, "text": " And it turns out these are really nothing compared to what this new AI can pull off."}, {"start": 137.24, "end": 140.32, "text": " Look, we can not only decorate this one,"}, {"start": 140.32, "end": 142.07999999999998, "text": " but here is the key."}, {"start": 142.07999999999998, "end": 150.07999999999998, "text": " And, oh yes, the one is fine, yes, but the reflection 
of this one is also computed correctly."}, {"start": 150.51999999999998, "end": 154.44, "text": " Now, this feels like black magic and we are not even done yet."}, {"start": 154.6, "end": 159.64, "text": " Now, hold on to your papers because here come my two favorite examples."}, {"start": 160.16, "end": 163.35999999999999, "text": " Example number one, biking in Wonderland."}, {"start": 163.35999999999999, "end": 165.16, "text": " I love this."}, {"start": 165.16, "end": 169.68, "text": " Now, you see here that not even this technique is perfect."}, {"start": 169.68, "end": 174.16, "text": " If you look behind the spokes of the wheel, you see a fair amount of warping."}, {"start": 174.16, "end": 178.32, "text": " I still think this is a miracle that it can be pulled off at all."}, {"start": 178.32, "end": 182.07999999999998, "text": " Example number two, a picturesque trip."}, {"start": 182.07999999999998, "end": 189.56, "text": " Here not only the background has been changed, but even the water has changed as chunks of ice have also been added."}, {"start": 189.56, "end": 193.6, "text": " And with that, I wonder how easy it is to do this."}, {"start": 193.6, "end": 196.96, "text": " I'll tell you in a moment after we look at this."}, {"start": 197.51999999999998, "end": 202.76, "text": " Yes, there is also one issue here with the warping of the water in the wake of the boat,"}, {"start": 202.76, "end": 207.35999999999999, "text": " but this is an excellent point for us to invoke the first law of papers,"}, {"start": 207.35999999999999, "end": 209.92, "text": " which says, do not look at where we are,"}, {"start": 209.92, "end": 213.48, "text": " look at where we will be, two more papers down the line."}, {"start": 213.48, "end": 218.0, "text": " So, how much work do we have to put in to pull all this off?"}, {"start": 218.0, "end": 219.88, "text": " A day of work maybe?"}, {"start": 219.88, "end": 222.0, "text": " Nope, not even close."}, {"start": 222.0, "end": 228.2, "text": " If you have been holding onto your paper so far, now squeeze that paper and look at this."}, {"start": 228.2, "end": 233.08, "text": " We can unwrap the background of the footage and stylize it as we please."}, {"start": 233.08, "end": 236.6, "text": " All this takes is just editing one image."}, {"start": 236.6, "end": 239.12, "text": " This goes in, this comes out."}, {"start": 239.12, "end": 242.44, "text": " Wow, absolutely anyone can do this."}, {"start": 242.44, "end": 249.04, "text": " So, this is a huge step in democratizing video stylization and bringing it to everyone."}, {"start": 249.04, "end": 250.96, "text": " What a time to be alive."}, {"start": 250.96, "end": 254.52, "text": " This video has been supported by weights and biases."}, {"start": 254.52, "end": 259.92, "text": " Being a machine learning researcher means doing tons of experiments and, of course,"}, {"start": 259.92, "end": 262.24, "text": " creating tons of data."}, {"start": 262.24, "end": 266.68, "text": " But I am not looking for data, I am looking for insights."}, {"start": 266.68, "end": 270.04, "text": " And weights and biases helps with exactly that."}, {"start": 270.04, "end": 274.48, "text": " They have tools for experiment tracking, data set and model versioning"}, {"start": 274.48, "end": 277.88, "text": " and even hyper parameter optimization."}, {"start": 277.88, "end": 284.6, "text": " No wonder this is the experiment tracking tool choice of open AI, Toyota Research, Samsung"}, {"start": 284.6, "end": 287.24, "text": " 
and many more prestigious labs."}, {"start": 287.24, "end": 295.44, "text": " Make sure to use the link WNB.me slash paper intro or just click the link in the video description"}, {"start": 295.44, "end": 301.32, "text": " and try this 10 minute example of weights and biases today to experience the wonderful"}, {"start": 301.32, "end": 307.56, "text": " feeling of training a neural network and being in control of your experiments."}, {"start": 307.56, "end": 309.84, "text": " After you try it, you won't want to go back."}, {"start": 309.84, "end": 339.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=yptwRRpPEBM
Photos Go In, Reality Comes Out…And Fast! 🌁
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Plenoxels: Radiance Fields without Neural Networks" is available here: https://alexyu.net/plenoxels/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #plenoxels
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a collection of photos like these and magically create a video where we can fly through these photos. And it gets better, we will be able to do it quickly, and, get this, no AI is required. So, how is this even possible? Especially given that the input is only a handful of photos. Well, typically we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. Of course, that sounds impossible. Yes, some information is given about the scene, but this is really not that much. And as you see, this is not impossible at all through the power of learning-based techniques. This previous AI is already capable of pulling off this amazing trick. And today, I am going to show you that through this incredible new paper, something like this can even be done at home, on our own machines. Now, the previously showcased technique and its predecessors build on gathering training data and training a neural network to pull this off. Here you see the training process of one of them compared to the reference results. Well, it looks like we need to be really patient, as this process is quite lengthy, and for the majority of the time, we don't get any usable results until nearly a day into this process. Now, hold on to your papers, because here comes the twist. What is the twist? Well, these are not reference results. No, no, these are the results from the new technique. Yes, you heard it right. It doesn't require a neural network, and thus trains so quickly that it almost immediately looks like the final result, while the original technique is still unable to produce anything usable. That is absolutely insane. Okay, so it's quick, real quick. But how good are the results? Well, the previous technique was able to produce this after approximately one and a half days of training. And what about the new technique? All it needs is 8.8. 8.8 what? Days? Hours? No, no. 8.8 minutes. And the result looks like this. Not only as good, but even a bit better than what the previous method could do in one and a half days. Whoa! So, I mentioned that the results are typically even a bit better, which is quite remarkable. Let's take a closer look. This is the previous technique after more than a day. This is the new method after 18 minutes. Now, it says that the new technique is 0.3 decibels better. That does not sound like much, does it? Well, note that the decibel scale is not linear, it is logarithmic. What does this mean? It means this. Look, a small difference in the numbers can mean a big difference in quality. And it is really close to the real results. All this after 20 minutes of processing. Bravo. And it does not stop there. The technique is also quite robust. It works well on forward-facing scenes and 360-degree rotations, and we can even use it to disassemble a scene into its foreground and background elements. Note that the previous NeRF technique we compared to was published just about a year and a half ago. Such incredible improvement in so little time. And here comes the kicker. All this is possible today with a handcrafted technique. No AI is required. What a time to be alive! PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it.
And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
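The decibel remark above refers to PSNR, the usual image quality metric in these papers. Since PSNR is 10 times the base-10 logarithm of a ratio involving the mean squared error, a fixed decibel gap corresponds to a multiplicative drop in error, which is why small-looking numbers matter. A quick sketch of the arithmetic:

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 1.0) -> float:
    # Peak signal-to-noise ratio in decibels: 10 * log10(MAX^2 / MSE).
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A gap of d decibels means the mean squared error shrinks by 10^(d/10):
for delta_db in (0.3, 1.0, 3.0):
    print(f"+{delta_db} dB -> MSE is {10 ** (delta_db / 10):.2f}x smaller")
```

So the 0.3 dB improvement quoted in the episode already means roughly 7% less squared error, and every 3 dB halves it.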
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 11.44, "text": " Today, we are going to take a collection of photos like these and magically create a video"}, {"start": 11.44, "end": 13.76, "text": " where we can fly through these photos."}, {"start": 13.76, "end": 21.04, "text": " And it gets better, we will be able to do it quickly and get this no AI is required."}, {"start": 21.04, "end": 23.68, "text": " So, how is this even possible?"}, {"start": 23.68, "end": 28.080000000000002, "text": " Especially that the input is only a handful of photos."}, {"start": 28.08, "end": 34.96, "text": " Well, typically we give it to a learning algorithm and ask it to synthesize a photo realistic video"}, {"start": 34.96, "end": 38.16, "text": " where we fly through the scene as we please."}, {"start": 38.16, "end": 40.8, "text": " Of course, that sounds impossible."}, {"start": 40.8, "end": 44.72, "text": " Especially that some information is given about the scene,"}, {"start": 44.72, "end": 47.28, "text": " but this is really not that much."}, {"start": 47.28, "end": 53.28, "text": " And as you see, this is not impossible at all through the power of learning-based techniques."}, {"start": 53.28, "end": 58.72, "text": " This previous AI is already capable of pulling off this amazing trick."}, {"start": 58.72, "end": 63.44, "text": " And today, I am going to show you that through this incredible no paper,"}, {"start": 63.44, "end": 68.48, "text": " something like this can even be done at home on our own machines."}, {"start": 68.48, "end": 74.96000000000001, "text": " Now, the previously showcased technique and its predecessors are building on gathering training data"}, {"start": 74.96000000000001, "end": 78.08, "text": " and training a neural network to pull this off."}, {"start": 78.08, "end": 83.04, "text": " Here you see the training process of one of them compared to the reference results."}, {"start": 83.04, "end": 88.32000000000001, "text": " Well, it looks like we need to be really patient as this process is quite lengthy"}, {"start": 88.32000000000001, "end": 94.4, "text": " and for the majority of the time, we don't get any usable results for nearly a day into this process."}, {"start": 95.12, "end": 99.04, "text": " Now, hold on to your papers because here comes the twist."}, {"start": 99.68, "end": 100.96000000000001, "text": " What is the twist?"}, {"start": 100.96000000000001, "end": 104.32000000000001, "text": " Well, these are not reference results."}, {"start": 104.32000000000001, "end": 107.36000000000001, "text": " No, no, these are the results from the new technique."}, {"start": 108.0, "end": 109.68, "text": " Yes, you heard it right."}, {"start": 109.68, "end": 116.64, "text": " It doesn't require a neural network and thus trains so quickly that it almost immediately looks like"}, {"start": 116.64, "end": 122.56, "text": " the final result while the original technique is still unable to produce anything usable."}, {"start": 123.28, "end": 125.36000000000001, "text": " That is absolutely insane."}, {"start": 125.92000000000002, "end": 129.12, "text": " Okay, so it's quick, real quick."}, {"start": 129.12, "end": 132.16, "text": " But how good are the results?"}, {"start": 132.16, "end": 139.92, "text": " Well, the previous technique was able to produce this after approximately one and a half days of training."}, {"start": 140.07999999999998, "end": 142.32, "text": " And what about the new 
technique?"}, {"start": 142.32, "end": 144.72, "text": " All it needs is 8.8."}, {"start": 144.72, "end": 146.24, "text": " 8.8 what?"}, {"start": 146.24, "end": 148.0, "text": " Days hours?"}, {"start": 148.0, "end": 148.88, "text": " No, no."}, {"start": 148.88, "end": 150.8, "text": " 8.8 minutes."}, {"start": 150.8, "end": 153.35999999999999, "text": " And the result looks like this."}, {"start": 153.35999999999999, "end": 160.24, "text": " Not only as good, but even a bit better than what the previous method could do in one and a half days."}, {"start": 160.24, "end": 161.52, "text": " Whoa!"}, {"start": 161.52, "end": 167.76000000000002, "text": " So, I mentioned that the results are typically even a bit better, which is quite remarkable."}, {"start": 167.76000000000002, "end": 169.68, "text": " Let's take a closer look."}, {"start": 171.52, "end": 174.4, "text": " This is the previous technique after more than a day."}, {"start": 175.20000000000002, "end": 178.48000000000002, "text": " This is the new method after 18 minutes."}, {"start": 179.12, "end": 183.52, "text": " Now, it says that the new technique is 0.3 decibels better."}, {"start": 184.08, "end": 186.4, "text": " That does not sound like much, does it?"}, {"start": 186.4, "end": 191.84, "text": " Well, note that the decibel scale is not linear, it is logarithmic."}, {"start": 191.84, "end": 193.44, "text": " What does this mean?"}, {"start": 193.44, "end": 194.64000000000001, "text": " It means this."}, {"start": 194.64000000000001, "end": 200.64000000000001, "text": " Look, a small numerical difference in the numbers can mean a big difference in quality."}, {"start": 200.64000000000001, "end": 203.36, "text": " And it is really close to the real results."}, {"start": 203.36, "end": 207.68, "text": " All this after 20 minutes of processing, bravo."}, {"start": 207.68, "end": 209.92000000000002, "text": " And it does not stop there."}, {"start": 209.92000000000002, "end": 212.56, "text": " The technique is also quite robust."}, {"start": 212.56, "end": 217.68, "text": " It works well on forward-facing scenes, 360-degree rotations,"}, {"start": 217.68, "end": 224.24, "text": " and we can even use it to disassemble a scene into its foreground and background elements."}, {"start": 224.8, "end": 230.88, "text": " Note that the previous nerve technique we compared to was published just about a year and a half ago."}, {"start": 231.44, "end": 234.64000000000001, "text": " Such incredible improvement in so little time."}, {"start": 235.2, "end": 236.48000000000002, "text": " Here comes the kicker."}, {"start": 236.48000000000002, "end": 240.56, "text": " All this is possible today with a handcrafted technique."}, {"start": 240.56, "end": 242.72, "text": " No AI is required."}, {"start": 242.72, "end": 244.4, "text": " What a time to be alive!"}, {"start": 244.4, "end": 247.68, "text": " Perceptilebs is a visual API for TensorFlow,"}, {"start": 247.68, "end": 252.56, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 252.56, "end": 256.8, "text": " This gives you a faster way to build out models with more transparency"}, {"start": 256.8, "end": 262.4, "text": " into how your model is architected, how it performs, and how to debug it."}, {"start": 262.4, "end": 266.8, "text": " And it even generates visualizations for all the model variables"}, {"start": 266.8, "end": 271.2, "text": " and gives you recommendations both during modeling and training"}, {"start": 271.2, "end": 274.08, "text": 
" and does all this automatically."}, {"start": 274.08, "end": 278.40000000000003, "text": " I only wish I had a tool like this when I was working on my neural networks"}, {"start": 278.40000000000003, "end": 280.24, "text": " during my PhD years."}, {"start": 280.24, "end": 286.56, "text": " Visit perceptilebs.com, slash papers, and start using their system for free today."}, {"start": 286.56, "end": 291.92, "text": " Our thanks to perceptilebs for their support and for helping us make better videos for you."}, {"start": 291.92, "end": 299.52000000000004, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=55PJtqpXAm4
Stanford Invented The Ultimate Bouncy Simulator! 🏀
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Bounce Maps: An Improved Restitution Model for Real-Time Rigid-Body Impact" is available here: https://graphics.stanford.edu/projects/bouncemap/ 📝 The amazing previous works: - Input video, output sound - https://www.youtube.com/watch?v=kwqme8mEgz4 - Input sound, output video - https://www.youtube.com/watch?v=aMo7pkkaZ9o ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to design a crazy baseball bat, a hockey stick that's not working well, springs that won't stop bouncing, and letters that can kind of stop bouncing. And yes, you see it correctly, this is a paper from 2017. Why? Well, there are some good works that are timeless. This is one of them. You will see why in a moment. When I read this paper, I saw that it is from Professor Doug James's group at Stanford University, and at this point I knew that crazy things are to be expected. Our seasoned Fellow Scholars know that these folks do some absolutely amazing things that shouldn't even be possible. For instance, one of their earlier papers takes an animation and the physics data for these bubbles, and, impossible as it might appear, synthesizes the sound of these virtual bubbles. So, video goes in, sound comes out. And hold onto your papers, because that's nothing compared to their other work, which takes not the video, but the sound that we recorded. So, the sound goes in, that's easy enough, and what does it do? It creates a video animation that matches these sounds. That's a crazy paper that works exceptionally well. We love those around here. And to top it off, both of these were handcrafted techniques, no machine learning was applied. So, what about today's paper? Well, today's paper is about designing things that don't work, or things that work too well. What does that mean? Well, let me explain. Here is a map that shows the physical bounciness parameter for a not too exciting old simulation. Everything and every part does the same. And here is the new method. Red means bouncy, blue means stiff. And now on to the simulation. The old one with the fixed parameters, well, not bad, but it is not too exciting either. And here is the new one. Now that is a simulation that has some personality. So, with this, we can reimagine things as if parts of them were made of rubber and other parts of wood or steel. Look, here the red color on the knob of the baseball bat means that this part will be made bouncy, while the end will be made stiff. What does this do? Well, let's see together. This is the reference simulation, where every part has the same bounciness. And now, let's see that bouncing knob. There we go. With this, we can unleash our artistic vision and get a simulation that works properly given these crazy material parameters. Now, let's design a crazy hockey stick. This part of the hockey stick is very bouncy. This part too; however, this part will be the sweet spot, at least for this experiment. Let's hit that puck and see how it behaves. Yes, from the red bouncy regions, indeed, the puck rebounds a great deal. And now, let's see the sweet spot. Yes, it rebounds much less. Let's see all of them side by side. Now, creating such an algorithm is quite a challenge. Look, it has to work on smooth geometry built from hundreds of thousands of triangles. And one of the key challenges is that the duration of the contact can be... what? Are you seeing what I am seeing? The duration of some of these contacts is measured in the order of tens of microseconds. And it still works well, and it is still accurate. That is absolutely amazing. Now, of course, even though we can have detailed geometry made of crazy new materials, this is not only a great tool for artists, it could also help with contact analysis and other cool engineering applications where we manufacture things that hit each other. Glorious.
So, this is an amazing, timeless work from 2017, and I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good, people have to know. Thank you very much for watching this, and let's spread the word together. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
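For intuition on what a spatially varying bounciness parameter does at impact time, here is a minimal sketch of the standard restitution model with the coefficient looked up per contact point. The dictionary standing in for a "bounce map" and its values are illustrative assumptions; the paper derives its per-point restitution values in a far more principled way from the object's geometry and material.

```python
import numpy as np

def resolve_impact(velocity: np.ndarray, normal: np.ndarray,
                   restitution: float) -> np.ndarray:
    # Reflect the normal component of the impact velocity, scaled by the
    # local restitution (0 = perfectly dead/stiff, 1 = perfectly bouncy).
    v_n = np.dot(velocity, normal)
    if v_n >= 0:            # already separating; nothing to resolve
        return velocity
    # Cancel the incoming normal velocity and add the rebound on top.
    return velocity - (1.0 + restitution) * v_n * normal

# Toy per-region lookup standing in for a bounce map on the baseball bat.
bounce_map = {"knob": 0.9, "sweet_spot": 0.1}
n = np.array([0.0, 1.0, 0.0])        # contact normal
v = np.array([1.0, -5.0, 0.0])       # incoming velocity at the contact point
for region, e in bounce_map.items():
    print(region, resolve_impact(v, n, e))
```

Running this, the knob sends the ball back up at 4.5 units per second while the stiff sweet spot rebounds at only 0.5, which is exactly the knob-versus-end contrast the episode demonstrates.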
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.6, "text": " Today we are going to design a crazy baseball bat, a hockey stick that's not working well,"}, {"start": 11.6, "end": 19.2, "text": " springs that won't stop bouncing, and letters that can kind of stop bouncing."}, {"start": 19.2, "end": 24.400000000000002, "text": " And yes, you see it correctly, this is a paper from 2017."}, {"start": 25.04, "end": 26.0, "text": " Why?"}, {"start": 26.0, "end": 29.68, "text": " Well, there are some good works that are timeless."}, {"start": 29.68, "end": 31.28, "text": " This is one of them."}, {"start": 31.28, "end": 33.04, "text": " You will see why in a moment."}, {"start": 33.04, "end": 39.36, "text": " When I read this paper, I saw that it is from Professor Doug James's group at Stanford University,"}, {"start": 39.36, "end": 44.24, "text": " and at this point I knew that crazy things are to be expected."}, {"start": 44.24, "end": 50.8, "text": " Our season-fellow scholars know that these folks do some absolutely amazing things that shouldn't"}, {"start": 50.8, "end": 52.239999999999995, "text": " even be possible."}, {"start": 52.239999999999995, "end": 59.120000000000005, "text": " For instance, one of their earlier papers takes an animation and the physics data for these bubbles,"}, {"start": 59.12, "end": 65.52, "text": " and impossible as it might appear, synthesizes the sound of these virtual bubbles."}, {"start": 65.52, "end": 68.88, "text": " So, video goes in, sound comes out."}, {"start": 77.44, "end": 82.72, "text": " And hold onto your papers because that's nothing compared to their other work,"}, {"start": 82.72, "end": 87.75999999999999, "text": " which takes not the video, but the sound that we recorded."}, {"start": 87.76, "end": 92.88000000000001, "text": " So, the sound goes in, that's easy enough, and what does it do?"}, {"start": 92.88000000000001, "end": 96.96000000000001, "text": " It creates a video animation that matches these sounds."}, {"start": 104.72, "end": 108.56, "text": " That's a crazy paper that works exceptionally well."}, {"start": 108.56, "end": 110.4, "text": " We love those around here."}, {"start": 110.4, "end": 116.72, "text": " And to top it off, both of these were handcrafted techniques, no machine learning was applied."}, {"start": 116.72, "end": 118.96, "text": " So, what about today's paper?"}, {"start": 118.96, "end": 125.76, "text": " Well, today's paper is about designing things that don't work, or things that work too well."}, {"start": 126.4, "end": 127.76, "text": " What does that mean?"}, {"start": 127.76, "end": 129.68, "text": " Well, let me explain."}, {"start": 129.68, "end": 136.56, "text": " Here is a map that shows the physical bounce in a parameter for a not too exciting old simulation."}, {"start": 136.56, "end": 139.92, "text": " Everything and every part does the same."}, {"start": 139.92, "end": 142.0, "text": " And here is the new method."}, {"start": 142.0, "end": 145.2, "text": " Red means bouncy, blue means stiff."}, {"start": 145.2, "end": 147.6, "text": " And now on to the simulation."}, {"start": 147.6, "end": 154.32, "text": " The old one with the fixed parameters, well, not bad, but it is not too exciting either."}, {"start": 154.32, "end": 156.32, "text": " And here is the new one."}, {"start": 157.67999999999998, "end": 161.51999999999998, "text": " Now that is a simulation that has some personality."}, {"start": 
161.51999999999998, "end": 167.51999999999998, "text": " So, with this, we can reimagine as if parts of things were made of rubber"}, {"start": 167.51999999999998, "end": 171.04, "text": " and other parts of wood were steel."}, {"start": 171.04, "end": 177.76, "text": " Look, here the red color on the knob of the baseball bat means that this will be made bouncy"}, {"start": 177.76, "end": 180.56, "text": " while the end will be made stiff."}, {"start": 180.56, "end": 181.76, "text": " What does this do?"}, {"start": 181.76, "end": 183.76, "text": " Well, let's see together."}, {"start": 183.76, "end": 188.39999999999998, "text": " This is the reference simulation where every part has the same bounce in us."}, {"start": 189.12, "end": 191.28, "text": " And now, let's see that bouncing knob."}, {"start": 192.16, "end": 193.04, "text": " There we go."}, {"start": 193.04, "end": 198.79999999999998, "text": " With this, we can unleash our artistic vision and get a simulation that works properly"}, {"start": 198.8, "end": 201.68, "text": " given these crazy material parameters."}, {"start": 201.68, "end": 204.72, "text": " Now, let's design a crazy hockey stick."}, {"start": 204.72, "end": 207.60000000000002, "text": " This part of the hockey stick is very bouncy."}, {"start": 207.60000000000002, "end": 214.08, "text": " This part too, however, this part will be the sweet spot, at least for this experiment."}, {"start": 214.08, "end": 216.72000000000003, "text": " Let's hit that pack and see how it behaves."}, {"start": 217.36, "end": 222.4, "text": " Yes, from the red bouncy regions, indeed, the pack rebounds a great deal."}, {"start": 222.96, "end": 224.96, "text": " And now, let's see the sweet spot."}, {"start": 225.92000000000002, "end": 228.48000000000002, "text": " Yes, it rebounds much less."}, {"start": 228.48, "end": 230.64, "text": " Let's see all of them side by side."}, {"start": 231.11999999999998, "end": 234.72, "text": " Now, creating such an algorithm is quite a challenge."}, {"start": 234.72, "end": 241.44, "text": " Look, it has to work on smooth geometry built from hundreds of thousands of triangles."}, {"start": 241.44, "end": 245.76, "text": " And one of the key challenges is that the duration of the context can be"}, {"start": 246.56, "end": 246.88, "text": " what?"}, {"start": 247.67999999999998, "end": 249.83999999999997, "text": " Are you seeing what I am seeing?"}, {"start": 250.39999999999998, "end": 256.71999999999997, "text": " The duration of some of these contexts is measured in the order of tens of microseconds."}, {"start": 256.72, "end": 260.48, "text": " And it still works well and it's still accurate."}, {"start": 260.8, "end": 263.28000000000003, "text": " That is absolutely amazing."}, {"start": 263.28000000000003, "end": 269.04, "text": " Now, of course, even though we can have detailed geometry made of crazy new materials,"}, {"start": 269.04, "end": 274.88000000000005, "text": " this is not only a great tool for artists, this could also help with contact analysis"}, {"start": 274.88000000000005, "end": 280.56, "text": " and other cool engineering applications where we manufacture things that hit each other."}, {"start": 280.56, "end": 281.6, "text": " A glorious."}, {"start": 281.6, "end": 289.04, "text": " So, this is an amazing, timeless work from 2017 and I am worried that if we don't talk about it here"}, {"start": 289.04, "end": 292.72, "text": " on two minute papers, almost no one will talk about it."}, {"start": 292.72, "end": 294.88, "text": " 
And these works are so good."}, {"start": 294.88, "end": 296.40000000000003, "text": " People have to know."}, {"start": 296.40000000000003, "end": 300.64000000000004, "text": " Thank you very much for watching this and let's spread the word together."}, {"start": 300.64000000000004, "end": 304.16, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 304.16, "end": 310.08000000000004, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 310.08, "end": 316.96, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 316.96, "end": 324.56, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 324.56, "end": 329.84, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 329.84, "end": 334.4, "text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 334.4, "end": 338.15999999999997, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 338.16, "end": 344.40000000000003, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 344.40000000000003, "end": 344.96000000000004, "text": " today."}, {"start": 344.96000000000004, "end": 350.56, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 350.56, "end": 380.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=S-Jj3ybaUNg
This New AI Creates Lava From Water! 🌊
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "VGPNN: Diverse Generation from a Single Video Made Possible" is available here: https://nivha.github.io/vgpnn/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This new learning-based method is capable of generating new videos, creating lava from water, enhancing your dance videos, adding new players to a football game, and more. This one will feel like it can do absolutely anything. For instance, have a look at these videos. Believe it or not, some of these were generated by an AI from this new paper. Let's try to find out together which ones are real and which are generated. Now, before we start, many of the results that you will see here will be flawed, but I promise that there will be some gems in there too. This one doesn't make too much sense. One down. Two, the bus appears to be changing length from time to time. Three, the bus pops out of existence. Four, wait, this actually looks pretty good until we find out that someone is about to get rear-ended. This leaves us with two examples. Are they real or are they faked? Now, please stop the video and let me know in the comments what you think. So, this one is real footage. Okay, what about this guy? Well, get this, this one is actually one of the generated ones. So just that one was real, and this is what this new technique is capable of. We give it a sample video and it re-imagines variations of it. Of course, the results are hit or miss. For instance, here, people and cars just pop in and out of existence, but if we try it on the billiard balls, wow, now this is something. Look at how well it preserved the specular highlights, and the shadows move with the balls in the synthetic variations too. Once again, the results are generally hit or miss, but if we look through a few results, we often find some gems in there. But this is nothing compared to what is to come. Let's call this first feature video synthesis, and we have four more really cool applications with somewhat flawed but sometimes amazing results. Now, check this out. Here comes feature number two: video analogies. This makes it possible for us to mix two videos that depict things that follow a similar logic. For instance, here only four of the 16 videos are real; the rest are all generated. Now, here comes feature number three: time retargeting. For instance, we can lengthen or shorten videos. Well, that sounds simple enough, no need for an AI for that, but here's the key. This does not mean just adding new frames to the end of the video. Look, they are different. Yes, the entirety of the video is getting redesigned here. We can use this to lengthen videos without really adding new content to them. Absolutely amazing. We can also use it to shorten a video. Now, of course, once again, this doesn't mean that it just chops off the end of the video. No, no, it means something much more challenging. Look, it makes sure that all of your killer moves make it into the video, but in a shorter amount of time. It makes the video tighter, if you will. Amazing. Feature number four: sketch to video. Here, we can take an input video, add a crude drawing, and the video will follow this drawing. The result is, of course, not perfect, but in return, this can handle many number to number transitions. And now, feature number five: video inpainting on steroids. Previous techniques can help us delete part of an image or even a video and generate data in these holes that makes sense given their surroundings. But this one does something way better. Look, we can cut out different parts of the video and mark the missing region with a color, and then what happens? Oh, yes.
The blue regions will contain a player from the blue team, the white region a player from the white team, or if we wish to get someone out of the way, we can do that too. Just mark them green. Here, the white regions will be inpainted with clouds and the blue ones with birds. Loving it. But wait a second, we noted that there are already good techniques out there for image inpainting. There are also good techniques out there for video time retargeting. So, what is so interesting here? Well, two things. One, here, all of these things are being done with just one technique. Normally, we would need a collection of different methods to perform all of these. But here, just one AI. Two, now hold on to your papers and look at this. Whoa! Previous techniques take forever to generate these results. 144p means an extremely crude, pixelated image, and even then they take from hours to, my goodness, even days. So, what about this new one? It can generate much higher resolution images, and in a matter of minutes. That is an incredible leap in just one paper. Now, clearly, most of these results aren't perfect. I would argue that they are not even close to perfect. But some of them already show a lot of promise. And, as always, dear Fellow Scholars, do not forget to apply the first law of papers, which says: don't look at where we are, look at where we will be, two more papers down the line. And two more papers down the line, I am sure that not two, but five, or maybe even six out of these six videos will look significantly more real. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
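For rough intuition about how a single exemplar video can be re-imagined, here is a toy, single-scale, 2D patch nearest-neighbor resynthesis sketch in the spirit of GPNN-style methods. The real VGPNN works coarse-to-fine on space-time video patches, so everything below, from the patch size to the noisy initialization, is an assumption for illustration only.

import numpy as np

# Toy patch nearest-neighbor resynthesis: perturb a source image, then
# rebuild it by replacing every patch with its nearest neighbor from the
# source, averaging overlapping patch votes. A plausible "variation"
# emerges because all output content is borrowed from the exemplar.

def extract_patches(img, p):
    H, W = img.shape
    return np.stack([img[i:i+p, j:j+p].ravel()
                     for i in range(H - p + 1)
                     for j in range(W - p + 1)])

def nn_resynthesis(source, guess, p=5):
    src = extract_patches(source, p)
    H, W = guess.shape
    out = np.zeros_like(guess)
    weight = np.zeros_like(guess)
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            q = guess[i:i+p, j:j+p].ravel()
            k = np.argmin(((src - q) ** 2).sum(axis=1))  # brute-force NN
            out[i:i+p, j:j+p] += src[k].reshape(p, p)
            weight[i:i+p, j:j+p] += 1.0
    return out / weight   # average the overlapping patch votes

rng = np.random.default_rng(0)
source = rng.random((32, 32))
guess = source + 0.3 * rng.normal(size=source.shape)  # coarse variation seed
variation = nn_resynthesis(source, guess)
print("mean deviation from exemplar:", np.abs(variation - source).mean())

Because every output patch is copied from the exemplar, artifacts of the kind mentioned above, objects popping in and out of existence, correspond to patches being matched from the wrong place or time.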
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.48, "end": 9.200000000000001, "text": " This new learning-based method is capable of generating new videos,"}, {"start": 9.200000000000001, "end": 13.44, "text": " creating lava from water, enhancing your dance videos,"}, {"start": 13.44, "end": 16.72, "text": " adding new players to a football game, and more."}, {"start": 16.72, "end": 20.400000000000002, "text": " This one will feel like it can do absolutely anything."}, {"start": 20.400000000000002, "end": 23.2, "text": " For instance, have a look at these videos."}, {"start": 23.2, "end": 28.72, "text": " Believe it or not, some of these were generated by an AI from this new paper."}, {"start": 28.72, "end": 33.6, "text": " Let's try to find out together which ones are real and which are generated."}, {"start": 33.6, "end": 38.32, "text": " Now, before we start, many of the results that you will see here will be flawed,"}, {"start": 38.32, "end": 42.32, "text": " but I promise that there will be some gems in there too."}, {"start": 42.32, "end": 44.48, "text": " This one doesn't make too much sense."}, {"start": 44.48, "end": 45.519999999999996, "text": " One down."}, {"start": 45.519999999999996, "end": 49.68, "text": " Two, the bus appears to be changing length from time to time."}, {"start": 50.480000000000004, "end": 53.36, "text": " Three, the bus pops out of existence."}, {"start": 53.36, "end": 62.56, "text": " Four, wait, this actually looks pretty good until we find out that someone is about to get"}, {"start": 62.56, "end": 63.6, "text": " rear-ended."}, {"start": 63.6, "end": 65.76, "text": " This leaves us with two examples."}, {"start": 65.76, "end": 68.72, "text": " Are they real or are they fixed?"}, {"start": 68.72, "end": 72.32, "text": " Now, please stop the video and let me know in the comments what you think."}, {"start": 73.2, "end": 76.16, "text": " So, this one is real footage."}, {"start": 76.16, "end": 78.48, "text": " Okay, what about this guy?"}, {"start": 78.48, "end": 83.60000000000001, "text": " Well, get this, this one is actually one of the generated ones."}, {"start": 84.0, "end": 88.72, "text": " Just this one was real and this is what this new technique is capable of."}, {"start": 88.72, "end": 93.84, "text": " We give it a sample video and it re-imagines variations of it."}, {"start": 93.84, "end": 96.32000000000001, "text": " Of course, the results are hit or miss."}, {"start": 96.32000000000001, "end": 102.08000000000001, "text": " For instance, here, people and cars just pop in and out of existence,"}, {"start": 102.08000000000001, "end": 107.2, "text": " but if we try it on the billiard balls, wow, now this is something."}, {"start": 107.2, "end": 114.0, "text": " Look at how well it preserved the specular highlights and also the shadows move with the balls"}, {"start": 114.0, "end": 115.76, "text": " in the synthetic variations too."}, {"start": 115.76, "end": 121.92, "text": " Once again, the results are generally hit or miss, but if we look through a few results,"}, {"start": 121.92, "end": 124.0, "text": " we often find some jumps out there."}, {"start": 124.0, "end": 127.60000000000001, "text": " But this is nothing compared to what is to come."}, {"start": 127.60000000000001, "end": 133.68, "text": " Let's call this first feature video synthesis and we have four more really cool applications"}, {"start": 133.68, "end": 138.08, "text": " with somewhat flawed but 
sometimes amazing results."}, {"start": 138.08, "end": 139.84, "text": " Now, check this out."}, {"start": 139.84, "end": 142.88, "text": " Here comes number two video analogies."}, {"start": 142.88, "end": 150.24, "text": " This makes it possible for us to mix two videos that depict things that follow a similar logic."}, {"start": 150.24, "end": 156.4, "text": " For instance, here only four of the 16 videos are real, the rest are all generated."}, {"start": 157.76000000000002, "end": 161.76000000000002, "text": " Now, here comes feature number three time retargeting."}, {"start": 161.76, "end": 165.44, "text": " For instance, we can lengthen or shorten videos."}, {"start": 165.44, "end": 171.67999999999998, "text": " Well, that sounds simple enough, no need for an AI for that, but here's the key."}, {"start": 171.67999999999998, "end": 176.56, "text": " This does not mean just adding new frames to the end of the video."}, {"start": 176.56, "end": 178.39999999999998, "text": " Look, they are different."}, {"start": 179.12, "end": 183.04, "text": " Yes, the entirety of the video is getting redesigned here."}, {"start": 183.04, "end": 188.39999999999998, "text": " We can use this to lengthen videos without really adding new content to it."}, {"start": 188.39999999999998, "end": 189.6, "text": " Absolutely amazing."}, {"start": 189.6, "end": 192.72, "text": " We can also use it to shorten a video."}, {"start": 192.72, "end": 198.48, "text": " Now, of course, once again, this doesn't mean that it just chops off the end of the video."}, {"start": 198.48, "end": 202.56, "text": " No, no, it means something much more challenging."}, {"start": 202.56, "end": 207.6, "text": " Look, it makes sure that all of your killer moves make it into the video,"}, {"start": 207.6, "end": 210.24, "text": " but in a shorter amount of time."}, {"start": 210.24, "end": 212.79999999999998, "text": " It makes the video tighter if you will."}, {"start": 212.79999999999998, "end": 216.48, "text": " Amazing feature number four, sketch to video."}, {"start": 216.48, "end": 223.84, "text": " Here, we can take an input video, add a crew drawing, and the video will follow this drawing."}, {"start": 223.84, "end": 231.6, "text": " The result is, of course, not perfect, but in return, this can handle many number to number transitions."}, {"start": 232.32, "end": 237.35999999999999, "text": " And now, feature number five, video in painting on steroids."}, {"start": 237.35999999999999, "end": 244.39999999999998, "text": " Previous techniques can help us delete part of an image or even a video and generate data"}, {"start": 244.4, "end": 248.4, "text": " in these holes that make sense given their surroundings."}, {"start": 248.4, "end": 251.6, "text": " But this one does something way better."}, {"start": 251.6, "end": 258.32, "text": " Look, we can cut out different parts of the video and mark the missing region with the color,"}, {"start": 258.32, "end": 260.4, "text": " and then what happens?"}, {"start": 260.4, "end": 261.28000000000003, "text": " Oh, yes."}, {"start": 261.28000000000003, "end": 266.08, "text": " The blue regions will contain a player from the blue team, the white region,"}, {"start": 266.08, "end": 270.96000000000004, "text": " a player from the white team, or if we wish to get someone out of the way,"}, {"start": 270.96000000000004, "end": 272.24, "text": " we can do that too."}, {"start": 272.24, "end": 275.28000000000003, "text": " Just mark them green."}, {"start": 275.28000000000003, "end": 280.88, 
"text": " Here, the white regions will be impainted with clouds and the blues with birds,"}, {"start": 280.88, "end": 281.76, "text": " loving it."}, {"start": 281.76, "end": 288.16, "text": " But, wait a second, we noted that there are already good techniques out there for image impainting."}, {"start": 288.16, "end": 292.72, "text": " There are also good techniques out there for video time retargeting."}, {"start": 292.72, "end": 295.04, "text": " So, what is so interesting here?"}, {"start": 295.04, "end": 296.72, "text": " Well, two things."}, {"start": 296.72, "end": 302.24, "text": " One, here, all of these things are being done with just one technique."}, {"start": 302.24, "end": 307.12, "text": " Normally, we would need a collection of different methods to perform all of these."}, {"start": 307.12, "end": 309.76000000000005, "text": " But here, just one AI."}, {"start": 309.76000000000005, "end": 314.32000000000005, "text": " Two, now hold on to your papers and look at this."}, {"start": 314.32000000000005, "end": 315.36, "text": " Whoa!"}, {"start": 315.36, "end": 318.96000000000004, "text": " Previous techniques take forever to generate these results."}, {"start": 318.96000000000004, "end": 323.20000000000005, "text": " 144p means an extremely crude, pixelated image,"}, {"start": 323.2, "end": 329.59999999999997, "text": " and even then they take from hours to my goodness, even days."}, {"start": 329.59999999999997, "end": 331.76, "text": " So, what about this new one?"}, {"start": 331.76, "end": 337.36, "text": " It can generate much higher resolution images and in a matter of minutes."}, {"start": 337.36, "end": 341.12, "text": " That is an incredible leap in just one paper."}, {"start": 341.12, "end": 344.48, "text": " Now, clearly, most of these results aren't perfect."}, {"start": 344.48, "end": 347.76, "text": " I would argue that they are not even close to perfect."}, {"start": 347.76, "end": 351.28, "text": " But some of them already show a lot of promise."}, {"start": 351.28, "end": 357.2, "text": " And, as always, dear fellow scholars, don't not forget to apply the first law of papers."}, {"start": 357.2, "end": 363.11999999999995, "text": " Which says, don't look at where we are, look at where we will be, two more papers down the line."}, {"start": 363.11999999999995, "end": 368.47999999999996, "text": " And two more papers down the line, I am sure that not two, but five,"}, {"start": 368.47999999999996, "end": 373.76, "text": " or maybe even six out of these six videos, will look significantly more real."}, {"start": 373.76, "end": 377.11999999999995, "text": " This video has been supported by weights and biases."}, {"start": 377.12, "end": 381.76, "text": " They have an amazing podcast by the name Gradient Descent, where they interview"}, {"start": 381.76, "end": 388.64, "text": " machine learning experts who discuss how they use learning based algorithms to solve real world problems."}, {"start": 388.64, "end": 394.24, "text": " They've discussed biology, teaching robots, machine learning in outer space,"}, {"start": 394.24, "end": 396.16, "text": " and a whole lot more."}, {"start": 396.16, "end": 399.12, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 399.12, "end": 406.64, "text": " Make sure to visit them through wnb.me slash gd or just click the link in the video description."}, {"start": 406.64, "end": 410.0, "text": " Our thanks to weights and biases for their long standing support"}, {"start": 410.0, "end": 412.88, "text": " and for 
helping us make better videos for you."}, {"start": 412.88, "end": 415.03999999999996, "text": " Thanks for watching and for your generous support."}, {"start": 415.03999999999996, "end": 417.28, "text": " Thanks for watching and for your generous support."}, {"start": 417.28, "end": 445.11999999999995, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mFnGBz_rPfU
New AI Makes You Play Table Tennis…In a Virtual World! 🏓
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "TransPose: Real-time 3D Human Translation and Pose Estimation with Six Inertial Sensors" is available here: https://calciferzh.github.io/publications/yi2021transpose https://xinyu-yi.github.io/TransPose/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how easily we can transfer our motion onto a virtual character, and even play virtual table tennis as if we were a character in a computer game. Hmm, this research work proposes to not use the industry-standard motion capture sensors to do this. Instead, they promise a neural network that can perform full-body motion capture from six inertial measurement units. These are essentially gyroscopes that report accelerations and orientations. And this is the key. This presents us with two huge advantages. Hold on to your papers for advantage number one, which is: no cameras are needed. Yes, that's right. Why is that great news? Well, a camera is a vision-based system, and if people are far away from the camera, it might have some trouble making out what they are doing, of course, because it can barely see them. But not with the inertial measurement units and this neural network. They can also be further away, maybe even a room or two away, and no problem. So good. And if it doesn't even need to see us, we can hide behind different objects, and look, it can still reconstruct where we are. Loving it. Or we can get two people to play table tennis. We can only see the back of this player, and the occlusion situation is getting even worse as they turn and jump around a great deal. Now you made me curious, let's look at the reconstruction together. Wow, that is an amazing reconstruction. In the end, if the players agree that it was a good match, they can hug it out. Or, wait a second, maybe not so much. In any case, the system still works. And advantage number two, ah, of course, since it thinks in terms of orientations, it still doesn't need to see you. So, oh yes, it also works in the dark. Note that so do some infrared camera-based motion capture systems, but here no cameras are needed. And let's see that reconstruction. There is a little jitter in the movement, but otherwise, very cool. And if you have been holding onto your paper so far, now squeeze that paper, because I am going to tell you how quickly this runs. And that is 90 frames per second, easily in real time. Now, of course, not even this technique is perfect; I am sure you noticed that there is a bit of a delay in the movements, and often some jitter too. And, as always, do not think of this paper as the final destination. Think of this as an amazing step forward, and always, always apply the first law of papers. What is that? Well, just imagine how much this could improve just two more papers down the line. I am sure there will be no delays and no jitter. So, from now on, we are one step closer to being able to play virtual table tennis, or even work out with our friends in a virtual world. What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description.
Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
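As a rough illustration of the mapping such a system learns, here is a minimal, hypothetical PyTorch sketch: a recurrent network reads per-frame measurements from six inertial measurement units, each contributing a 3D acceleration and a flattened 3x3 orientation matrix, and predicts per-joint rotations for every frame. The layer sizes, the bidirectional LSTM, and the 24-joint skeleton are all assumptions for illustration, not the authors' exact multi-stage architecture.

import torch
import torch.nn as nn

NUM_IMUS = 6
IMU_DIM = 3 + 9        # acceleration + flattened rotation matrix per sensor
NUM_JOINTS = 24        # assumed SMPL-style skeleton

class IMUPoseNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(NUM_IMUS * IMU_DIM, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        # predict an axis-angle rotation (3 numbers) per joint per frame
        self.head = nn.Linear(2 * hidden, NUM_JOINTS * 3)

    def forward(self, imu_seq):
        # imu_seq: (batch, frames, NUM_IMUS * IMU_DIM)
        features, _ = self.rnn(imu_seq)
        return self.head(features)   # (batch, frames, NUM_JOINTS * 3)

model = IMUPoseNet()
one_second = torch.randn(1, 90, NUM_IMUS * IMU_DIM)  # 90 fps, as in the video
print(model(one_second).shape)      # torch.Size([1, 90, 72])

Note that a network which looks at future frames before committing to a pose is one plausible source of the small delay mentioned above; the actual paper makes its own latency trade-offs.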
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Ejolene Fehir."}, {"start": 4.76, "end": 11.6, "text": " Today, we are going to see how easily we can transfer our motion onto a virtual character."}, {"start": 11.6, "end": 18.2, "text": " And, even play virtual table tennis as if we were a character in a computer game."}, {"start": 18.2, "end": 25.400000000000002, "text": " Hmm, this research work proposes to not use the industry standard motion capture sensors to do this."}, {"start": 25.4, "end": 34.4, "text": " Instead, they promise a neural network that can perform full body motion capture from six inertial measurement units."}, {"start": 34.4, "end": 40.4, "text": " These are essentially gyroscopes that report accelerations and orientations."}, {"start": 40.4, "end": 42.2, "text": " And this is the key."}, {"start": 42.2, "end": 45.2, "text": " This presents us with two huge advantages."}, {"start": 45.2, "end": 50.8, "text": " Hold on to your papers for advantage number one, which is, no cameras are needed."}, {"start": 50.8, "end": 52.599999999999994, "text": " Yes, that's right."}, {"start": 52.6, "end": 59.800000000000004, "text": " Why is that great news? Well, a camera is a vision-based system, and if people are far away from the camera,"}, {"start": 59.800000000000004, "end": 66.2, "text": " it might have some trouble making out what they are doing, of course, because it can barely see them."}, {"start": 66.2, "end": 70.0, "text": " But not with the inertial measurement units and this neural network."}, {"start": 70.0, "end": 76.8, "text": " They can also be further away, maybe even a room or two away, and no problem."}, {"start": 76.8, "end": 78.0, "text": " So good."}, {"start": 78.0, "end": 84.4, "text": " And if it doesn't even need to see us, we can hide behind different objects and look."}, {"start": 84.4, "end": 88.0, "text": " It can still reconstruct where we are, loving it."}, {"start": 88.0, "end": 91.8, "text": " Or we can get two people to play table tennis."}, {"start": 91.8, "end": 100.8, "text": " We can only see the back of this player and the occlusion situation is getting even worse as they turn and jump around a great deal."}, {"start": 100.8, "end": 105.4, "text": " Now you made me curious, let's look at the reconstruction together."}, {"start": 105.4, "end": 108.80000000000001, "text": " Wow, that is an amazing reconstruction."}, {"start": 108.80000000000001, "end": 114.0, "text": " In the end, if the players agree that it was a good match, they can hug it out."}, {"start": 114.0, "end": 118.2, "text": " Or, wait a second, maybe not so much."}, {"start": 118.2, "end": 121.0, "text": " In any case, the system still works."}, {"start": 121.0, "end": 129.4, "text": " And advantage number two, ah, of course, since it thinks in terms of orientation, it still doesn't need to see you."}, {"start": 129.4, "end": 133.4, "text": " So, oh yes, it also works in the dark."}, {"start": 133.4, "end": 140.6, "text": " Note that so do some infrared camera-based motion capture systems, but here no cameras are needed."}, {"start": 140.6, "end": 143.0, "text": " And let's see that reconstruction."}, {"start": 143.0, "end": 147.6, "text": " There is a little jitter in the movement, but otherwise, very cool."}, {"start": 147.6, "end": 152.8, "text": " And if you have been holding onto your paper so far, now squeeze that paper,"}, {"start": 152.8, "end": 156.20000000000002, "text": " because I am going to tell you how quickly 
this runs."}, {"start": 156.20000000000002, "end": 161.4, "text": " And that is 90 frames per second, easily in real time."}, {"start": 161.4, "end": 168.6, "text": " Now, of course, not even this technique is perfect, I am sure you noticed that there is a bit of a delay in the movements,"}, {"start": 168.6, "end": 171.0, "text": " and often some jitter too."}, {"start": 171.0, "end": 175.8, "text": " And, as always, do not think of this paper as the final destination."}, {"start": 175.8, "end": 181.8, "text": " Think of this as an amazingly forward, and always, always apply the first law of papers."}, {"start": 181.8, "end": 183.20000000000002, "text": " What is that?"}, {"start": 183.20000000000002, "end": 188.8, "text": " Well, just imagine how much this could improve just two more papers down the line."}, {"start": 188.8, "end": 192.60000000000002, "text": " I am sure there will be no delays and no jitter."}, {"start": 192.60000000000002, "end": 198.0, "text": " So, from now on, we are one step closer to be able to play virtual table tennis,"}, {"start": 198.0, "end": 202.4, "text": " or even work out with our friends in a virtual world."}, {"start": 202.4, "end": 204.20000000000002, "text": " What a time to be alive!"}, {"start": 204.20000000000002, "end": 207.60000000000002, "text": " This video has been supported by weights and biases."}, {"start": 207.60000000000002, "end": 214.20000000000002, "text": " Check out the recent offering fully connected, a place where they bring machine learning practitioners together"}, {"start": 214.2, "end": 222.0, "text": " to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together."}, {"start": 222.0, "end": 228.0, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the series,"}, {"start": 228.0, "end": 231.2, "text": " but don't really know where to start."}, {"start": 231.2, "end": 232.79999999999998, "text": " And here it is."}, {"start": 232.79999999999998, "end": 238.6, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 238.6, "end": 242.39999999999998, "text": " get your papers accepted to a conference, and more."}, {"start": 242.4, "end": 249.8, "text": " Make sure to visit them through wnb.me slash papers, or just click the link in the video description."}, {"start": 249.8, "end": 253.4, "text": " Our thanks to weights and biases for their longstanding support,"}, {"start": 253.4, "end": 256.2, "text": " and for helping us make better videos for you."}, {"start": 256.2, "end": 282.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wXaVokqhHDk
Microsoft’s AI Understands Humans…But It Had Never Seen One! 👩‍💼
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Fake It Till You Make It - Face analysis in the wild using synthetic data alone " is available here: https://microsoft.github.io/FaceSynthetics/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. None of these faces are real, and today we are going to find out whether these synthetic humans can, in a way, pass for real humans, but not quite in the sense that you might think. Through the power of computer graphics algorithms, we are able to create virtual worlds and, of course, within those virtual worlds, virtual humans too. So here is a wacky idea. If we have all this virtual data, why not use it instead of real photos to train new neural networks? Hmm, wait a second. Maybe this idea is not so wacky after all, especially because we can generate as many virtual humans as we wish, and all this data is perfectly annotated. The location and shape of the eyebrows is known even when they are occluded, and we know the depth and geometry of every single hair strand of the beard. If done well, there will be no issues about the identity of the subjects or the distribution of the data. Also, we are not limited by our wardrobe or the environments we have access to. In this virtual world, we can do anything we wish. So good. And of course, here is the ultimate question that decides the fate of this project, and that question is: does this work? Now we can use all this data to train a neural network, and the crazy thing about this is that this neural network never saw a real human. So here is the ultimate test: videos of real humans. Now hold onto your papers, and let's see if the little AI can label the image and find the important landmarks. Wow! I can hardly believe my eyes. It not only does it for a still image, but even for a video, and it is so accurate from frame to frame that no flickering artifacts emerge. That is outstanding. And get this, the measurements say that it can stand up to other state-of-the-art detector neural networks that were trained on real human faces. And... Oh my! Are you seeing what I am seeing? Can this really be? If we use the same neural network to learn on real human faces, it won't do better at all. In fact, it will do worse than the same AI with the virtual data. That is insanity. The advantages of this practically infinitely flexible synthetic dataset show really well here. The paper also discusses in detail that this only holds if we use the synthetic dataset well and include different rotations and lighting environments for the same photos. Something that is not always so easy in real environments. Now, this test was called face parsing, and now comes landmark detection. This also works remarkably well, but wait a second, once again, are you seeing what I am seeing? The landmarks can rotate all they want, and it will know where they should be even if they are occluded by headphones, the hair, or even when they are not visible at all. Now, of course, not even this technique is perfect: tracking the eyes correctly requires additional considerations and real data, but only for that, which is relatively easy to produce. Also, the simulator at this point can only generate the head and the neck regions and nothing else. But, you know the drill, a couple of papers down the line, and I am sure that this will be able to generate full human bodies. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure.
Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
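Here is a toy PyTorch sketch of that "never saw a real human" training recipe: a small regressor trained purely on synthetic images whose landmark annotations are perfect by construction. The blob renderer below is a stand-in assumption for a full synthetic face renderer; only the overall pattern, render with exact labels and then train, reflects the idea above.

import torch
import torch.nn as nn

def render_synthetic(batch, size=32):
    """Synthetic stand-in 'faces': one Gaussian blob per image at a random,
    exactly-known position (the 'perfectly annotated landmark')."""
    xy = torch.rand(batch, 2) * (size - 1)
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32),
                            indexing="ij")
    d2 = (xs - xy[:, 0, None, None]) ** 2 + (ys - xy[:, 1, None, None]) ** 2
    imgs = torch.exp(-d2 / 8.0).unsqueeze(1)   # (batch, 1, size, size)
    return imgs, xy / (size - 1)               # targets normalized to [0, 1]

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 2), nn.Sigmoid())

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(300):                        # training never sees real data
    imgs, target = render_synthetic(64)
    loss = nn.functional.mse_loss(model(imgs), target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final synthetic landmark error: {loss.item():.4f}")

The point of the sketch is the data pipeline, not the architecture: because the renderer controls every image, the labels are exact even for configurations that would be occluded or expensive to annotate in real photographs.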
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karojola Ifehir."}, {"start": 5.0, "end": 10.44, "text": " None of these faces are real, and today we are going to find out whether these synthetic"}, {"start": 10.44, "end": 18.240000000000002, "text": " humans can, in a way, pass for real humans, but not quite in the sense that you might think."}, {"start": 18.240000000000002, "end": 23.28, "text": " Through the power of computer graphics algorithms, we are able to create virtual worlds and,"}, {"start": 23.28, "end": 27.560000000000002, "text": " of course, within those virtual worlds, virtual humans too."}, {"start": 27.56, "end": 30.36, "text": " So here is a wacky idea."}, {"start": 30.36, "end": 37.12, "text": " If we have all this virtual data, why not use these instead of real photos to train new neural"}, {"start": 37.12, "end": 38.12, "text": " networks?"}, {"start": 38.12, "end": 40.879999999999995, "text": " Hmm, wait a second."}, {"start": 40.879999999999995, "end": 46.68, "text": " Maybe this idea is not so wacky after all, especially because we can generate as many"}, {"start": 46.68, "end": 51.72, "text": " virtual humans as we wish, and all this data is perfectly annotated."}, {"start": 51.72, "end": 57.48, "text": " The location and shape of the eyebrows is known even when they are occluded, and we know"}, {"start": 57.48, "end": 62.16, "text": " the depth and geometry of every single hair strand of the beard."}, {"start": 62.16, "end": 67.6, "text": " If done well, there will be no issues about the identity of the subjects or the distribution"}, {"start": 67.6, "end": 68.6, "text": " of the data."}, {"start": 68.6, "end": 73.96000000000001, "text": " Also, we are not limited by our wardrobe or the environments we have access to."}, {"start": 73.96000000000001, "end": 77.56, "text": " In this virtual world, we can do anything we wish."}, {"start": 77.56, "end": 78.56, "text": " So good."}, {"start": 78.56, "end": 83.64, "text": " And of course, here is the ultimate question that decides the fate of this project, and"}, {"start": 83.64, "end": 86.68, "text": " that question is, does this work?"}, {"start": 86.68, "end": 92.48, "text": " Now we can use all this data to train a neural network, and the crazy thing about this is"}, {"start": 92.48, "end": 96.36, "text": " that this neural network never saw a real human."}, {"start": 96.36, "end": 101.28, "text": " So here is the ultimate test, videos of real humans."}, {"start": 101.28, "end": 107.76, "text": " Now hold onto your papers, and let's see if the little AI can label the image and find"}, {"start": 107.76, "end": 109.28, "text": " the important landmarks."}, {"start": 109.28, "end": 110.28, "text": " Wow!"}, {"start": 110.28, "end": 112.48, "text": " I can hardly believe my eyes."}, {"start": 112.48, "end": 118.36, "text": " It not only does it for a still image, but for even a video, and it is so accurate from"}, {"start": 118.36, "end": 122.4, "text": " frame to frame that no flickering artifacts emerge."}, {"start": 122.4, "end": 124.56, "text": " That is outstanding."}, {"start": 124.56, "end": 129.92000000000002, "text": " And get this, the measurement say that it can stand up to other state-of-the-art detector"}, {"start": 129.92000000000002, "end": 134.16, "text": " neural networks that were trained on real human faces."}, {"start": 134.16, "end": 135.16, "text": " And..."}, {"start": 135.16, "end": 136.16, "text": " Oh my!"}, {"start": 136.16, "end": 
138.79999999999998, "text": " Are you seeing what I am seeing?"}, {"start": 138.79999999999998, "end": 140.4, "text": " Can this really be?"}, {"start": 140.4, "end": 146.04, "text": " If we use the same neural network to learn on real human faces, it won't do better at"}, {"start": 146.04, "end": 147.04, "text": " all."}, {"start": 147.04, "end": 153.28, "text": " In fact, it will do worse than the same AI with the virtual data."}, {"start": 153.28, "end": 154.92, "text": " That is insanity."}, {"start": 154.92, "end": 160.76, "text": " The advantages of this practically infinitely flexible synthetic dataset shows really well"}, {"start": 160.76, "end": 161.76, "text": " here."}, {"start": 161.76, "end": 166.48, "text": " It is also possible to compare also discusses in detail that this only holds if we use the"}, {"start": 166.48, "end": 172.23999999999998, "text": " synthetic dataset well and include different rotations and lighting environments for the"}, {"start": 172.23999999999998, "end": 174.04, "text": " same photos."}, {"start": 174.04, "end": 177.76, "text": " Something that is not always so easy in real environments."}, {"start": 177.76, "end": 183.0, "text": " Now this test was called face parsing, and now comes landmark detection."}, {"start": 183.0, "end": 189.6, "text": " This also works remarkably well, but wait a second, once again, are you seeing what I am"}, {"start": 189.6, "end": 190.6, "text": " seeing?"}, {"start": 190.6, "end": 195.92, "text": " The landmarks can rotate all they want, and it will know where they should be even if"}, {"start": 195.92, "end": 201.84, "text": " they are occluded by headphones, the hair, or even when they are not visible at all."}, {"start": 201.84, "end": 208.04, "text": " Now of course, not even this technique is perfect, tracking the eyes correctly requires additional"}, {"start": 208.04, "end": 213.0, "text": " considerations and real data, but only for that."}, {"start": 213.0, "end": 215.28, "text": " Which is relatively easy to produce."}, {"start": 215.28, "end": 221.0, "text": " Also, the simulator at this point can only generate the head and the neck regions and"}, {"start": 221.0, "end": 222.0, "text": " nothing else."}, {"start": 222.0, "end": 227.2, "text": " But, you know the drill, a couple papers down the line, and I am sure that this will be"}, {"start": 227.2, "end": 230.32, "text": " able to generate full human bodies."}, {"start": 230.32, "end": 233.76, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 233.76, "end": 239.72, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 239.72, "end": 247.64, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your"}, {"start": 247.64, "end": 254.12, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 254.12, "end": 259.6, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 259.6, "end": 265.88, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances"}, {"start": 265.88, "end": 267.8, "text": " workstations or servers."}, {"start": 267.8, "end": 273.8, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 273.8, "end": 274.8, "text": " today."}, {"start": 274.8, "end": 279.28000000000003, "text": " Our thanks to Lambda for their long-standing support and for helping us make 
better videos"}, {"start": 279.28000000000003, "end": 280.28000000000003, "text": " for you."}, {"start": 280.28, "end": 307.03999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=B-zxoJ9o7s0
Google’s New AI: This is Where Selfies Go Hyper! 🤳
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields" is available here: https://hypernerf.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are not going to create selfies, we are going to create nerfies instead. What are those? Well, nerfies are selfies from the future. The goal here is to take a selfie video and turn it into a portrait that we can rotate around in complete freedom. The technique appeared in November 2020, a little more than a year ago, and as you see, easily outperformed its predecessors. It shows a lot of strength in these examples and seems nearly invincible. However, there is a problem. What is the problem? Well, it still did not do all that well on moving things. Don't believe it? Let's try it out. This is the input video. Aha, there we go. As soon as there is a little movement in time, we see that this looks like a researcher who is about to have a powerful paper deadline experience. I wonder what the nerfie technique will do with this. Uh-oh, that's not optimal. And now, let's see if this new method called HyperNeRF is able to salvage the situation. Oh my, one moment perfectly frozen in time, and we can also move around with the camera. Kind of like the bullet time effect from The Matrix. Sensational. What is also sensational is that, of course, you seasoned Fellow Scholars immediately ask: okay, but which moment will get frozen in time? The one with the mouth closed or open? And HyperNeRF says, well, you tell me. Yes, we can even choose which moment we should freeze in time by exploring this thing that they call the hyperspace. Hence the name HyperNeRF. The process looks like this. So, if this can handle animations, well then, let's give it some real tough animations. This is one of my favorite examples, where we can even make a video of coffee being made. Yes, that is indeed the true paper deadline experience. And a key to creating these nerfies correctly is getting this depth map right. Here, the colors describe how far things are from the camera. That is challenging in and of itself, and now imagine that the camera is moving all the time. And not only that, but the subject of the scene is also moving all the time too. This is very challenging, and this technique does it just right. The previous nerfie technique from just a year ago had a great deal of trouble with this chocolate melting scene. Look, tons of artifacts and deformations as we move the camera. Now, hold on to your papers and let's see if the new method can do even this. Now that would be something. And wow, outstanding. I also loved how the authors went the extra mile with the presentation of their website. Look, all the authors are there, animated with this very method, putting their money where their papers are. Loving it. So with that, there you go. These are nerfies, selfies from the future. And finally, they really work. Such amazing improvements in approximately a year. The pace of progress in AI research is nothing short of amazing. What a time to be alive. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me slash paperforum and say hi, or just click the link in the video description.
Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
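As a hedged sketch of the hyperspace idea described above: besides the 3D sample position along each camera ray, the radiance field network also receives a small learned ambient coordinate per training frame, and freezing a moment in time amounts to fixing that coordinate while the camera rays change. The PyTorch dimensions and layer sizes below are illustrative assumptions, not the paper's actual architecture, which also includes deformation and positional encoding components.

import torch
import torch.nn as nn

AMBIENT_DIM = 2      # assumed low-dimensional "hyperspace" coordinate

class HyperSpaceField(nn.Module):
    def __init__(self, num_frames, hidden=128):
        super().__init__()
        # one learned ambient coordinate per training frame
        self.ambient = nn.Embedding(num_frames, AMBIENT_DIM)
        self.mlp = nn.Sequential(
            nn.Linear(3 + AMBIENT_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))            # RGB + density

    def forward(self, xyz, frame_ids):
        w = self.ambient(frame_ids)          # which moment in time
        out = self.mlp(torch.cat([xyz, w], dim=-1))
        rgb = torch.sigmoid(out[:, :3])
        sigma = torch.relu(out[:, 3])
        return rgb, sigma

field = HyperSpaceField(num_frames=100)
pts = torch.randn(1024, 3)                   # samples along camera rays
frozen_moment = torch.full((1024,), 42)      # fix one moment, move the camera
rgb, sigma = field(pts, frozen_moment)
print(rgb.shape, sigma.shape)                # [1024, 3] and [1024]

Rendering would then proceed as in a standard NeRF, by integrating these densities and colors along each ray; only the extra ambient input changes.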
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.4, "text": " Today, we are not going to create selfies, we are going to create Nerfis instead."}, {"start": 10.4, "end": 12.0, "text": " What are those?"}, {"start": 12.0, "end": 15.6, "text": " Well, Nerfis are selfies from the future."}, {"start": 15.6, "end": 21.92, "text": " The goal here is to take a selfie video and turn it into a portrait that we can rotate around"}, {"start": 21.92, "end": 23.52, "text": " in complete freedom."}, {"start": 23.52, "end": 28.560000000000002, "text": " The technique appeared in November 2020 a little more than a year ago,"}, {"start": 28.56, "end": 32.8, "text": " and as you see, easily outperformed its predecessors."}, {"start": 32.8, "end": 38.16, "text": " It shows a lot of strength in these examples and seems nearly invincible."}, {"start": 38.16, "end": 40.72, "text": " However, there is a problem."}, {"start": 40.72, "end": 41.92, "text": " What is the problem?"}, {"start": 41.92, "end": 46.16, "text": " Well, it still did not do all that well on moving things."}, {"start": 46.16, "end": 47.2, "text": " Don't believe it."}, {"start": 47.2, "end": 48.56, "text": " Let's try it out."}, {"start": 48.56, "end": 50.56, "text": " This is the input video."}, {"start": 50.56, "end": 52.8, "text": " Aha, there we go."}, {"start": 52.8, "end": 55.599999999999994, "text": " As soon as there is a little movement in time,"}, {"start": 55.6, "end": 62.08, "text": " we see that this looks like a researcher who is about to have a powerful paper deadline experience."}, {"start": 62.08, "end": 65.52, "text": " I wonder what the Nerfis technique will do with this."}, {"start": 66.16, "end": 68.08, "text": " Uh-oh, that's not optimal."}, {"start": 68.8, "end": 74.72, "text": " And now, let's see if this new method called HyperNurf is able to salvage the situation."}, {"start": 75.44, "end": 82.08, "text": " Oh my, one moment perfectly frozen in time and we can also move around with the camera."}, {"start": 82.08, "end": 85.6, "text": " Kind of like the bullet time effect from the matrix."}, {"start": 85.6, "end": 87.36, "text": " Sensational."}, {"start": 87.36, "end": 90.08, "text": " What is also sensational is that, of course,"}, {"start": 90.08, "end": 94.24, "text": " you season fellow scholars immediately ask, okay,"}, {"start": 94.24, "end": 97.44, "text": " but which moment will get frozen in time?"}, {"start": 97.44, "end": 100.56, "text": " The one with the mouth closed or open?"}, {"start": 101.2, "end": 104.4, "text": " And HyperNurf says, well, you tell me."}, {"start": 105.03999999999999, "end": 108.88, "text": " Yes, we can even choose which moment we should freeze in time"}, {"start": 108.88, "end": 112.88, "text": " by exploring this thing that they call the HyperSpace."}, {"start": 112.88, "end": 114.96, "text": " Hence the name HyperNurf."}, {"start": 114.96, "end": 116.56, "text": " The process looks like this."}, {"start": 117.19999999999999, "end": 119.75999999999999, "text": " So, if this can handle animations,"}, {"start": 119.75999999999999, "end": 123.11999999999999, "text": " well then, let's give it some real tough animations."}, {"start": 123.11999999999999, "end": 129.51999999999998, "text": " This is one of my favorite examples where we can even make a video of coffee being made."}, {"start": 129.51999999999998, "end": 133.04, "text": " Yes, that is indeed the true paper deadline 
experience."}, {"start": 133.04, "end": 138.64, "text": " And a key to creating these nerfies correctly is getting this depth map right."}, {"start": 139.2, "end": 143.28, "text": " Here, the colors describe how four things are from the camera."}, {"start": 143.84, "end": 150.64, "text": " That is challenging in and of itself and now imagine that the camera is moving all the time."}, {"start": 151.12, "end": 156.48, "text": " And not only that, but the subject of the scene is also moving all the time too."}, {"start": 157.2, "end": 161.44, "text": " This is very challenging and this technique does it just right."}, {"start": 161.44, "end": 164.64, "text": " The previous nerfie technique from just a year ago"}, {"start": 164.64, "end": 168.0, "text": " had a great deal of trouble with this chocolate melting scene."}, {"start": 168.0, "end": 172.32, "text": " Look, tons of artifacts and deformations as we move the camera."}, {"start": 172.88, "end": 178.24, "text": " Now, hold on to your papers and let's see if the new method can do even this."}, {"start": 178.24, "end": 180.16, "text": " Now that would be something."}, {"start": 180.16, "end": 183.68, "text": " And wow, outstanding."}, {"start": 184.48, "end": 189.84, "text": " I also loved how the authors, when the extra mile with the presentation of their website,"}, {"start": 189.84, "end": 194.4, "text": " look, all the authors are there animated with this very method,"}, {"start": 194.4, "end": 197.76, "text": " putting their money where their papers are, loving it."}, {"start": 198.08, "end": 200.24, "text": " So with that, there you go."}, {"start": 200.24, "end": 203.6, "text": " These are nerfies, selfies from the future."}, {"start": 203.6, "end": 206.24, "text": " And finally, they really work."}, {"start": 206.24, "end": 210.24, "text": " Such amazing improvements in approximately a year."}, {"start": 210.24, "end": 214.24, "text": " The pace of progress in AR research is nothing short of amazing."}, {"start": 214.24, "end": 216.0, "text": " What a time to be alive."}, {"start": 216.0, "end": 219.76, "text": " This video has been supported by weights and biases."}, {"start": 219.76, "end": 220.8, "text": " Look at this."}, {"start": 220.8, "end": 227.44, "text": " They have a great community forum that aims to make you the best machine learning engineer you can be."}, {"start": 227.44, "end": 231.68, "text": " You see, I always get messages from you fellow scholars telling me"}, {"start": 231.68, "end": 234.32, "text": " that you have been inspired by the series,"}, {"start": 234.32, "end": 237.36, "text": " but don't really know where to start."}, {"start": 237.36, "end": 238.88, "text": " And here it is."}, {"start": 238.88, "end": 242.72, "text": " In this forum, you can share your projects, ask for advice,"}, {"start": 242.72, "end": 245.52, "text": " look for collaborators and more."}, {"start": 245.52, "end": 250.64000000000001, "text": " Make sure to visit www.me-slash-paper-forum"}, {"start": 250.64000000000001, "end": 254.72, "text": " and say hi or just click the link in the video description."}, {"start": 254.72, "end": 258.40000000000003, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 258.40000000000003, "end": 261.2, "text": " and for helping us make better videos for you."}, {"start": 261.2, "end": 263.44, "text": " Thanks for watching and for your generous support."}, {"start": 263.44, "end": 279.36, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=szOMIn0YyUM
Ubisoft’s New AI Predicts the Future of Virtual Characters! 🐺
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "SuperTrack – Motion Tracking for Physically Simulated Characters using Supervised Learning" is available here: https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2021/11/24183638/SuperTrack.pdf https://montreal.ubisoft.com/en/supertrack-motion-tracking-for-physically-simulated-characters-using-supervised-learning/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take an AI and use it to synthesize these beautiful, crisp movement types. And you will see in a moment that this can do much, much more. So, how does this process work? There are similar previous techniques that took a big soup of motion-captured data and outdid each other on what they could learn from it. And they did it really well. For instance, one of these AIs was able to not only learn these movements, but even improve them, and even better, adapt them to different kinds of terrains. This other work used a small training set of general movements to reinvent a popular high-jump technique, the Fosbury flop, by itself. This allows the athlete to jump backward over the bar, thus lowering their center of gravity. And it could also do it on Mars. How cool is that? But this new paper takes a different vantage point. Instead of asking for more training data, it seeks to settle with less. But first, let's see what it does. Yes, we see that this new AI can match reference movements well, but that's not all of it. Not even close. The hallmark of a good AI is not being restricted to just a few movements, but being able to synthesize a great variety of different motions. So, can it do that? Wow, that is a ton of different kinds of motions, and the AI always seems to match the reference motions really well across the board. Bravo. We'll talk more about what that means in a moment. And here comes the best part. It does not only generalize to a variety of motions, but to a variety of body types as well. And we can bring these body types to different virtual environments too. This really seems like the whole package. And it gets better, because we can control them in real time. My goodness. So, how does all this work? Let's see. Yes, here the green is the target movement we would like to achieve, and the yellow is the AI's result. Now, here the trails represent the past. So, how close are they? Well, of course, we don't know exactly yet. So, let's line them up. And now we're talking. They are almost the same. But wait, does this even make sense? Aren't we just inventing a copying machine here? What is so interesting about being able to copy an already existing movement? Well, no, no, no. We are not copying here, not even close. What this new work does instead is that we give it an initial pose and ask it to predict the future. In particular, we ask: what is about to happen to this model? And the result is a messy trail. So, what does this mess mean? Well, actually, the mess is great news. This means that the true physics results and the AI predictions line up so well that they almost completely cover each other. This is not the first technique to attempt this. What about previous methods? Can they also do this? Well, these are all doing pretty well. Maybe this new work is not that big of an improvement. Wait a second. Oh boy, one contestant is down. And now, two have failed. And I love how they still keep dancing while down. A plus for effort, little AIs, but the third one is still in the game. Careful... Ouch. Yup, the new method is absolutely amazing. No question about it. And, of course, do not be fooled by these mannequins. These can be mapped to real characters in real video games too. So, this amazing new method is able to create higher quality animations, lets us grab a controller and play with them, and also requires a shorter training time.
Not only that, but the new method predicts more, and hence relies much less on the motion dataset we feed it, and therefore it is also less sensitive to its flaws. I love this. A solid step towards democratizing the creation of superb computer animations. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
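The "predict the future" step above is, at its core, supervised learning of a world model: a network sees the character's current state and is trained so its predicted next state matches what actually happened in the motion data, which is exactly why the predicted and ground-truth trails can cover each other so closely. Here is a minimal sketch of that idea in PyTorch; the state size, the random stand-in trajectory, and all hyperparameters are illustrative assumptions, not Ubisoft's implementation.

import torch
import torch.nn as nn

STATE_DIM = 32  # toy stand-in for joint positions, velocities, rotations

# World model: current state -> predicted next state.
world = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, STATE_DIM),
)
opt = torch.optim.Adam(world.parameters(), lr=1e-3)

# Stand-in "motion" trajectory of shape (T, STATE_DIM). In the real task
# this would come from motion capture / physics, not random numbers.
trajectory = torch.cumsum(0.01 * torch.randn(1000, STATE_DIM), dim=0)

for step in range(200):
    s_t, s_next = trajectory[:-1], trajectory[1:]
    pred = world(s_t)                     # predict one step ahead
    loss = ((pred - s_next) ** 2).mean()  # supervised: match what happened
    opt.zero_grad()
    loss.backward()
    opt.step()

# Roll the model forward from one pose. If training worked, the predicted
# trail should nearly cover the ground-truth one, as in the video.
with torch.no_grad():
    state = trajectory[0]
    for _ in range(10):
        state = world(state)
    err = (state - trajectory[10]).abs().mean().item()
    print(f"10-step prediction error: {err:.4f}")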
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.76, "end": 12.16, "text": " Today we are going to take an AI and use it to synthesize these beautiful, crisp movement types."}, {"start": 12.16, "end": 16.68, "text": " And you will see in a moment that this can do much, much more."}, {"start": 16.68, "end": 19.0, "text": " So, how does this process work?"}, {"start": 19.0, "end": 24.0, "text": " There are similar previous techniques that took a big soup of motion-captured data"}, {"start": 24.0, "end": 28.0, "text": " and outweighed each other on what they could learn from it."}, {"start": 28.0, "end": 30.4, "text": " And they did it really well."}, {"start": 30.4, "end": 35.2, "text": " For instance, one of these AI's was able to not only learn these movements,"}, {"start": 35.2, "end": 42.4, "text": " but even improve them and even better adapt them to different kinds of terrains."}, {"start": 42.4, "end": 49.6, "text": " This other work used a small training set of general movements to reinvent a popular high-jump technique,"}, {"start": 49.6, "end": 52.8, "text": " the Fosbury flop, by itself."}, {"start": 52.8, "end": 59.599999999999994, "text": " This allows the athlete to jump backward over the bar, thus lowering their center of gravity."}, {"start": 59.599999999999994, "end": 62.4, "text": " And it could also do it on Mars."}, {"start": 62.4, "end": 64.39999999999999, "text": " How cool is that?"}, {"start": 64.39999999999999, "end": 68.0, "text": " But this new paper takes a different vantage point."}, {"start": 68.0, "end": 73.4, "text": " Instead of asking for more training videos, it seeks to settle with less."}, {"start": 73.4, "end": 76.2, "text": " But first, let's see what it does."}, {"start": 76.2, "end": 81.0, "text": " Yes, we see that this new AI can match reference movements well,"}, {"start": 81.0, "end": 84.2, "text": " but that's not all of it. 
Not even close."}, {"start": 84.2, "end": 89.4, "text": " The hallmark of a good AI is not being restricted to just a few movements,"}, {"start": 89.4, "end": 94.2, "text": " but being able to synthesize a great variety of different motions."}, {"start": 94.2, "end": 96.8, "text": " So, can it do that?"}, {"start": 96.8, "end": 100.6, "text": " Wow, that is a ton of different kinds of motions,"}, {"start": 100.6, "end": 106.8, "text": " and the AI always seems to match the reference motions really well across the board."}, {"start": 106.8, "end": 110.6, "text": " Bravo, we'll talk more about what that means in a moment."}, {"start": 110.6, "end": 112.8, "text": " And here comes the best part."}, {"start": 112.8, "end": 116.19999999999999, "text": " It does not only generalize to a variety of motions,"}, {"start": 116.19999999999999, "end": 120.0, "text": " but to a variety of body types as well."}, {"start": 120.0, "end": 124.6, "text": " And we can bring these body types to different virtual environments too."}, {"start": 124.6, "end": 127.19999999999999, "text": " This really seems like the whole package."}, {"start": 127.19999999999999, "end": 132.2, "text": " And it gets better because we can control them in real time."}, {"start": 132.2, "end": 133.6, "text": " My goodness."}, {"start": 133.6, "end": 136.2, "text": " So, how does all this work?"}, {"start": 136.2, "end": 137.4, "text": " Let's see."}, {"start": 137.4, "end": 142.0, "text": " Yes, here the green is the target movement we would like to achieve,"}, {"start": 142.0, "end": 144.8, "text": " and the yellow is the AI's result."}, {"start": 144.8, "end": 148.4, "text": " Now, here the trails represent the past."}, {"start": 148.4, "end": 150.8, "text": " So, how close are they?"}, {"start": 150.8, "end": 153.8, "text": " Well, of course, we don't know exactly yet."}, {"start": 153.8, "end": 156.20000000000002, "text": " So, let's line them up."}, {"start": 156.20000000000002, "end": 158.8, "text": " And now we're talking."}, {"start": 158.8, "end": 160.8, "text": " They are almost the same."}, {"start": 160.8, "end": 163.20000000000002, "text": " But wait, does this even make sense?"}, {"start": 163.20000000000002, "end": 166.20000000000002, "text": " Aren't we just inventing a copying machine here?"}, {"start": 166.2, "end": 171.2, "text": " What is so interesting about being able to copy an already existing movement?"}, {"start": 171.2, "end": 173.2, "text": " Well, no, no, no."}, {"start": 173.2, "end": 176.2, "text": " We are not copying here, not even close."}, {"start": 176.2, "end": 180.6, "text": " What this new work does instead is that we give it an initial pose"}, {"start": 180.6, "end": 183.39999999999998, "text": " and ask it to predict the future."}, {"start": 183.39999999999998, "end": 187.39999999999998, "text": " In particular, we ask what is about to happen to this model?"}, {"start": 187.39999999999998, "end": 190.79999999999998, "text": " And the result is a message trail."}, {"start": 190.79999999999998, "end": 193.2, "text": " So, what does this mess mean?"}, {"start": 193.2, "end": 196.6, "text": " Well, actually, the mess is great news."}, {"start": 196.6, "end": 202.79999999999998, "text": " This means that the true physics results and the AI predictions line up so well"}, {"start": 202.79999999999998, "end": 206.0, "text": " that they almost completely cover each other."}, {"start": 206.0, "end": 208.79999999999998, "text": " This is not the first technique to attempt this."}, {"start": 
208.79999999999998, "end": 210.79999999999998, "text": " What about previous methods?"}, {"start": 210.79999999999998, "end": 212.6, "text": " Can they also do this?"}, {"start": 212.6, "end": 215.2, "text": " Well, these are all doing pretty good."}, {"start": 215.2, "end": 218.79999999999998, "text": " Maybe this new work is not that big of an improvement."}, {"start": 218.79999999999998, "end": 220.39999999999998, "text": " Wait a second."}, {"start": 220.4, "end": 223.6, "text": " Oh boy, one contestant is down."}, {"start": 223.6, "end": 225.8, "text": " And now, two have failed."}, {"start": 225.8, "end": 229.6, "text": " And I love how they still keep dancing while down."}, {"start": 229.6, "end": 235.8, "text": " A plus for effort, little AI's, but the third one is still in the game."}, {"start": 235.8, "end": 237.8, "text": " Careful, Ouch."}, {"start": 237.8, "end": 241.0, "text": " Yup, the new method is absolutely amazing."}, {"start": 241.0, "end": 242.8, "text": " No question about it."}, {"start": 242.8, "end": 246.20000000000002, "text": " And, of course, do not be fooled by these mannequins."}, {"start": 246.2, "end": 250.6, "text": " These can be mapped to real characters in real video games too."}, {"start": 250.6, "end": 255.6, "text": " So, this amazing new method is able to create higher quality animations,"}, {"start": 255.6, "end": 261.4, "text": " let us grab a controller and play with them and also requires a shorter training time."}, {"start": 261.4, "end": 264.4, "text": " Not only that, but the new method predicts more"}, {"start": 264.4, "end": 269.2, "text": " and hence relies much less on the motion dataset we feed it"}, {"start": 269.2, "end": 272.8, "text": " and therefore it is also less sensitive to its flaws."}, {"start": 272.8, "end": 274.4, "text": " I love this."}, {"start": 274.4, "end": 280.2, "text": " A solid step towards democratizing the creation of superb computer animations."}, {"start": 280.2, "end": 282.2, "text": " What a time to be alive."}, {"start": 282.2, "end": 285.79999999999995, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 285.79999999999995, "end": 289.4, "text": " If you're looking for inexpensive Cloud GPUs for AI,"}, {"start": 289.4, "end": 291.79999999999995, "text": " check out Lambda GPU Cloud."}, {"start": 291.79999999999995, "end": 298.59999999999997, "text": " They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances."}, {"start": 298.59999999999997, "end": 302.2, "text": " And hold onto your papers because Lambda GPU Cloud"}, {"start": 302.2, "end": 306.2, "text": " can cost less than half of AWS and Azure."}, {"start": 306.2, "end": 311.4, "text": " Plus, they are the only Cloud service with 48GB, RTX 8000."}, {"start": 311.4, "end": 316.0, "text": " Join researchers at organizations like Apple, MIT and Caltech"}, {"start": 316.0, "end": 319.8, "text": " in using Lambda Cloud instances, workstations or servers."}, {"start": 319.8, "end": 323.8, "text": " Make sure to go to LambdaLabs.com slash papers to sign up"}, {"start": 323.8, "end": 326.59999999999997, "text": " for one of their amazing GPU instances today."}, {"start": 326.59999999999997, "end": 329.3, "text": " Our thanks to Lambda for their longstanding support"}, {"start": 329.3, "end": 332.0, "text": " and for helping us make better videos for you."}, {"start": 332.0, "end": 334.2, "text": " Thanks for watching and for your generous support."}, {"start": 334.2, "end": 364.0, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=EFiUalqhWDc
Yes, These Are Virtual Dumplings! 🥟
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "Guaranteed Globally Injective 3D Deformation Processing" is available here: https://ipc-sim.github.io/IDP/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to absolutely destroy this virtual bunny, and then inflate this cat until it becomes a bit of a chunker. And I hope you like dumplings, because we'll also be able to start a cooking show in a virtual world. For instance, we can now squash and wrap and pinch and squeeze. And this is a computer graphics paper, so let's smash these things for good measure. There we go. Add some more, and the meal is now ready. Enjoy! This is all possible through this new paper that is capable of creating physics simulations with drastic geometry changes. And when I say drastic, I really mean it. And looking through the results, this paper feels like it can do absolutely anything. It promises a technique called injective deformation processing. Well, what does all this mean? It means great news. Finally, when we do these crazy experiments, things don't turn inside out, and the geometry does not overlap. Wanna see what those overlaps look like? Well, here they are. And luckily, we don't have to worry about this phenomenon with this new technique. Not only when inflating, but it also works correctly when we start deflating things. Now, talking about overlaps, let's see this animation sequence simulated with a previous method. Oh no, the belly is not moving, and my goodness, look at that. It gets even worse. What is even worse than a belly that does not move? Of course, intersection artifacts. Now, what you will see is not a resimulation of this experiment from scratch with this new method. No, no, even better. We give this flawed simulation to the new technique, and yes, it can even repair it. Wow, an absolute miracle. No more self-intersections, and finally, the belly is moving around in a realistic manner. And while we find out whether this armadillo simulated with the new method is dabbing, or if it is just shy, let's talk about how long we have to wait for a simulation like this. All-nighters? No, not at all. The inflation examples roughly take half a second per frame, which is unbelievable, and it goes up to 12 minutes per frame, which is required for the larger deformation experiments. And repairing an already existing flawed simulation also takes a similar amount of time. So, what about this armadillo? Yes, that is definitely a shy armadillo. So, from now on, we can apply drastic geometry changes in our virtual worlds, and I'm sure that two more papers down the line, all this will be possible in real time. Real-time dumplings in a virtual world. Yes, please, sign me up. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their amazing sweeps feature, which helps you find and reproduce your best runs, and even better, what made this particular run the best. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
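One way to build intuition for the injectivity guarantee above: before a deformation step is accepted, it can be shrunk until no element of the geometry turns inside out. The sketch below shows that flavor of safeguard on a tiny 2D mesh, using the sign of each triangle's area as the inversion test; this is a loose, assumed illustration in numpy of the general backtracking idea, not the paper's actual algorithm.

import numpy as np

def signed_areas(V, F):
    # Signed area of each triangle; a sign flip means the triangle has
    # turned inside out, i.e. the deformation stopped being injective there.
    a, b, c = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    ab, ac = b - a, c - a
    return 0.5 * (ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

def safe_step(V, F, dV, shrink=0.5, tries=20):
    # Try the full deformation step dV; while any triangle would invert,
    # keep shrinking the step until all signed areas stay positive.
    t = 1.0
    for _ in range(tries):
        if np.all(signed_areas(V + t * dV, F) > 0):
            return V + t * dV
        t *= shrink
    return V  # take no step rather than break injectivity

# Tiny 2D mesh: two triangles forming a unit square.
V = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
F = np.array([[0, 1, 2], [0, 2, 3]])

# A deformation that would flip the square's upper-left corner inside out.
dV = np.zeros_like(V)
dV[3] = [2.0, -2.0]

V_new = safe_step(V, F, dV)
print("all triangles still positively oriented:",
      bool(np.all(signed_areas(V_new, F) > 0)))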
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai Fahir."}, {"start": 4.64, "end": 8.8, "text": " Today we are going to absolutely destroy this virtual bunny,"}, {"start": 9.6, "end": 14.48, "text": " and then inflate this cat until it becomes a bit of a chunker."}, {"start": 14.48, "end": 21.52, "text": " And I hope you like dumplings because we'll also be able to start a cooking show in a virtual world."}, {"start": 21.52, "end": 27.92, "text": " For instance, we can now squash and wrap and pinch and squeeze."}, {"start": 27.92, "end": 33.28, "text": " And this is a computer graphics paper, so let's smash these things for good measure."}, {"start": 34.0, "end": 39.120000000000005, "text": " There we go. Add some more, and the meal is now ready. Enjoy!"}, {"start": 39.84, "end": 45.68000000000001, "text": " This is all possible through this new paper that is capable of creating physics simulations"}, {"start": 45.68000000000001, "end": 51.2, "text": " with drastic geometry changes. And when I say drastic, I really mean it."}, {"start": 51.2, "end": 57.120000000000005, "text": " And looking through the results, this paper feels like it can do absolutely anything."}, {"start": 57.12, "end": 61.44, "text": " It promises a technique called injective deformation processing."}, {"start": 61.44, "end": 65.75999999999999, "text": " Well, what does all this mean? It means great news."}, {"start": 65.75999999999999, "end": 71.03999999999999, "text": " Finally, when we do these crazy experiments, things don't turn inside out,"}, {"start": 71.03999999999999, "end": 73.84, "text": " and the geometry does not overlap."}, {"start": 73.84, "end": 78.0, "text": " Wanna see what those overlaps look like? Well, here it is."}, {"start": 78.0, "end": 82.56, "text": " And luckily, we don't have to worry about this phenomenon with this new technique."}, {"start": 82.56, "end": 88.32000000000001, "text": " Not only when inflating, but it also works correctly when we start deflating things."}, {"start": 89.28, "end": 96.0, "text": " Now, talking about overlaps, let's see this animation sequence simulated with a previous method."}, {"start": 97.12, "end": 104.88, "text": " Oh no, the belly is not moving, and my goodness, look at that. It gets even worse."}, {"start": 105.44, "end": 110.88, "text": " What is even worse than a belly that does not move? Of course, intersection artifacts."}, {"start": 110.88, "end": 117.75999999999999, "text": " Now, what you will see is not a resimulation of this experiment from scratch with this new method."}, {"start": 118.39999999999999, "end": 126.88, "text": " No, no, even better. We give this flat simulation to the no technique, and yes, it can even repair it."}, {"start": 127.52, "end": 136.32, "text": " Wow, an absolute miracle. No more self-intersections, and finally, the belly is moving around in a realistic"}, {"start": 136.32, "end": 142.95999999999998, "text": " manner. And while we find out whether this armadillo simulated with the new method is dabbing,"}, {"start": 142.95999999999998, "end": 148.72, "text": " or if it is just shy, let's talk about how long we have to wait for a simulation like this."}, {"start": 148.72, "end": 156.56, "text": " All nighters? No, not at all. 
The font inflation examples roughly take half a second per frame,"}, {"start": 156.56, "end": 163.35999999999999, "text": " that is unbelievable, and it goes up to 12 minutes per frame, which is required for the larger"}, {"start": 163.36, "end": 169.92000000000002, "text": " deformation experiments. And, repairing an already existing flat simulation also takes a similar"}, {"start": 169.92000000000002, "end": 176.88000000000002, "text": " amount of time. So, what about this armadillo? Yes, that is definitely a shy armadillo."}, {"start": 177.44000000000003, "end": 183.36, "text": " So, from now on, we can apply drastic geometry changes in our virtual worlds, and I'm sure"}, {"start": 183.36, "end": 189.36, "text": " that two more papers down the line, all this will be possible in real time. Real time dumplings"}, {"start": 189.36, "end": 196.0, "text": " in a virtual world. Yes, please, sign me up. What a time to be alive. Wates and biases provide"}, {"start": 196.0, "end": 202.08, "text": " tools to track your experiments in your deep learning projects. What you see here is their amazing"}, {"start": 202.08, "end": 209.60000000000002, "text": " sweeps feature, which helps you find and reproduce your best runs, and even better, what made this"}, {"start": 209.60000000000002, "end": 216.64000000000001, "text": " particular run the best. It is used by many prestigious labs, including OpenAI, Toyota Research,"}, {"start": 216.64, "end": 223.27999999999997, "text": " GitHub, and more. And, the best part is that Wates and Biasis is free for all individuals,"}, {"start": 223.27999999999997, "end": 230.88, "text": " academics, and open source projects. Make sure to visit them through wnb.com slash papers,"}, {"start": 230.88, "end": 236.23999999999998, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 236.23999999999998, "end": 242.0, "text": " Our thanks to Wates and Biasis for their long-standing support, and for helping us make better"}, {"start": 242.0, "end": 249.6, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MrKlPvWvQ2Q
NVIDIA’s New AI: Journey Into Virtual Reality!
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Physics-based Human Motion Estimation and Synthesis from Videos" is available here: https://nv-tlabs.github.io/physics-pose-estimation-project-page/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how NVIDIA's crazy new AI is able to understand basically any human movement. So, how is this even possible? And even more importantly, what is pose estimation? Well, simple: a video of people goes in, and the posture they are taking comes out. Now, you see here that previous techniques can already do this quite well. What's more, if we allow an AI to read the Wi-Fi signals bouncing around in the room, it can perform pose estimation even through walls. Kind of. Now, you may be wondering, what is pose estimation good for? By the end of this video, you will see that this can help us move around in virtual worlds, the metaverse, if you will. But let's not rush there yet, because there are still several key challenges here. One, we need to be super accurate to even put a dent into this problem. Why? Well, because if we have a video and we are off by just a tiny bit from frame to frame, this kind of flickering may happen. That's a challenge. Two, foot sliding. Yes, you heard it right, previous methods suffer from this phenomenon. You can see it in action here. And also here too. So, why does this happen? It happens because the technique has no knowledge of the physics of real human movements. So, scientists at NVIDIA, the University of Toronto and the Vector Institute fired up a collaboration, and when I first heard about their concept, I thought, you are doing what? But check this out. First, they perform a regular pose estimation. Of course, this is no good, as it has the dreaded temporal inconsistency, or in other words, flickering. And in other cases, often, foot sliding too. Now, hold on to your papers, because here comes the magic. They transfer the motion to a video game character and embed that character in a physics simulation. In this virtual world, the motion can be corrected to make sure it is physically correct. Now, remember, foot sliding happens because of the lack of knowledge of physics. So, perhaps this idea is not that crazy after all. Let's have a look. Now this will be quite a challenge: explosive sprinting motions. And... Whoa! This is amazing. This, dear Fellow Scholars, is superb pose estimation and tracking. And how about this? A good tennis serve includes lots of dynamic motion. And just look at how beautifully it reconstructs this move. Apparently, physics works. Now, the output needn't be stickmen. We can retarget these to proper textured virtual characters built from a triangle mesh. And that is just one step away from us being able to appear in the metaverse. No head-mounted display, no expensive studio, and no motion capture equipment is required. So, what is required? Actually, nothing. Just the raw input video of us. That is insanity. And all this produces physically correct motion. So, that crazy idea about taking people and transforming them into video game characters is not so crazy after all. So, now we are one step closer to being able to work and even have some coffee together in a virtual world. What a time to be alive. This video has been supported by Weights & Biases. And being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization.
No wonder this is the experiment tracking tool of choice at OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
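The foot-sliding repair described above can be pictured as a trade-off: stay as close as possible to what the pose estimator reported, while penalizing any foot motion during ground contact. The real system enforces this through full physics simulation; the sketch below is only a deliberately tiny 1D stand-in for that trade-off, assuming numpy, with made-up contact frames and penalty weights.

import numpy as np

T = 60
rng = np.random.default_rng(0)

# Raw per-frame foot positions from a noisy pose estimator: the foot
# should stay planted for frames 20..39, but the estimate drifts.
raw = np.concatenate([np.linspace(0.0, 1.0, 20),
                      1.0 + 0.05 * rng.standard_normal(20),
                      np.linspace(1.0, 2.0, 20)])
contact = np.zeros(T, dtype=bool)
contact[20:40] = True  # assume contact frames are detected elsewhere

x = raw.copy()
lam = 50.0  # how strongly sliding during contact is punished
for _ in range(2000):
    grad = 2.0 * (x - raw)                     # stay near the estimate
    dx = x[1:] - x[:-1]
    slide = dx * (contact[:-1] & contact[1:])  # velocity while in contact
    grad[:-1] -= 2.0 * lam * slide             # gradient of the
    grad[1:] += 2.0 * lam * slide              # sliding penalty
    x -= 1e-3 * grad

before = np.abs(np.diff(raw[20:40])).sum()
after = np.abs(np.diff(x[20:40])).sum()
print(f"foot travel during contact: raw={before:.3f}, corrected={after:.3f}")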
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 13.200000000000001, "text": " Today we are going to see how Nvidia's crazy new AI is able to understand basically any human movement."}, {"start": 13.200000000000001, "end": 15.84, "text": " So, how is this even possible?"}, {"start": 15.84, "end": 20.080000000000002, "text": " And even more importantly, what is pose estimation?"}, {"start": 20.080000000000002, "end": 26.64, "text": " Well, simple, a video of people goes in and the posture they are taking comes out."}, {"start": 26.64, "end": 31.6, "text": " Now, you see here that previous techniques can already do this quite well."}, {"start": 31.6, "end": 37.84, "text": " What's more, if we allow an AI to read the Wi-Fi signals bouncing around in the room,"}, {"start": 37.84, "end": 42.24, "text": " it can perform pose estimation even through walls."}, {"start": 42.24, "end": 43.2, "text": " Kind of."}, {"start": 43.2, "end": 47.2, "text": " Now, you may be wondering what is pose estimation good for?"}, {"start": 47.2, "end": 52.400000000000006, "text": " By the end of this video, you will see that this can help us move around in virtual worlds"}, {"start": 52.400000000000006, "end": 54.24, "text": " the metaverse, if you will."}, {"start": 54.24, "end": 60.08, "text": " But, let's not rush there yet because still, there are several key challenges here."}, {"start": 60.72, "end": 65.68, "text": " One, we need to be super accurate to even put a dent into this problem."}, {"start": 66.4, "end": 67.36, "text": " Why?"}, {"start": 67.36, "end": 74.08, "text": " Well, because if we have a video and we are off by just a tiny bit from frame to frame,"}, {"start": 74.08, "end": 76.0, "text": " this kind of flickering may happen."}, {"start": 76.64, "end": 77.68, "text": " That's a challenge."}, {"start": 78.96000000000001, "end": 80.88, "text": " Two, foot sliding."}, {"start": 80.88, "end": 85.03999999999999, "text": " Yes, you heard it right, previous methods suffer from this phenomenon."}, {"start": 85.03999999999999, "end": 86.72, "text": " You can see it in action here."}, {"start": 87.44, "end": 88.72, "text": " And also here too."}, {"start": 89.44, "end": 91.44, "text": " So, why does this happen?"}, {"start": 91.44, "end": 97.28, "text": " It happens because the technique has no knowledge of the physics of real human movements."}, {"start": 97.28, "end": 103.6, "text": " So, scientists at Nvidia, the University of Toronto and the Vector Institute fired up a"}, {"start": 103.6, "end": 108.56, "text": " collaboration and when I first heard about their concept, I thought you are doing what?"}, {"start": 108.56, "end": 110.96000000000001, "text": " But, check this out."}, {"start": 110.96000000000001, "end": 114.0, "text": " First, they perform a regular pose estimation."}, {"start": 114.0, "end": 121.2, "text": " Of course, this is no good as it has the dreaded temporal inconsistency or in other words flickering."}, {"start": 121.2, "end": 124.96000000000001, "text": " And in other cases, often, foot sliding too."}, {"start": 124.96000000000001, "end": 129.52, "text": " Now, hold on to your papers because here comes the magic."}, {"start": 129.52, "end": 136.48000000000002, "text": " They transfer the motion to a video game character and embed that character in a physics simulation."}, {"start": 136.48, "end": 141.84, "text": " In this virtual world, the motion can be corrected to make sure they are 
physically correct."}, {"start": 141.84, "end": 147.44, "text": " Now, remember, foot sliding happens because of the lack of knowledge in physics."}, {"start": 147.44, "end": 152.23999999999998, "text": " So, perhaps this idea is not that crazy after all."}, {"start": 152.23999999999998, "end": 153.2, "text": " Let's have a look."}, {"start": 153.2, "end": 158.07999999999998, "text": " Now this will be quite a challenge, explosive sprinting motions."}, {"start": 158.07999999999998, "end": 158.88, "text": " And..."}, {"start": 160.48, "end": 160.95999999999998, "text": " Whoa!"}, {"start": 160.95999999999998, "end": 162.95999999999998, "text": " This is amazing."}, {"start": 162.96, "end": 168.56, "text": " This, dear fellow scholars, is superb pose estimation and tracking."}, {"start": 168.56, "end": 170.64000000000001, "text": " And, how about this?"}, {"start": 170.64000000000001, "end": 174.0, "text": " A good tennis serve includes lots of dynamic motion."}, {"start": 174.0, "end": 178.48000000000002, "text": " And, just look at how beautifully it reconstructs this move."}, {"start": 178.48000000000002, "end": 180.72, "text": " Apparently, physics works."}, {"start": 180.72, "end": 183.60000000000002, "text": " Now, the output needn't be stickman."}, {"start": 183.60000000000002, "end": 188.96, "text": " We can retarget these to proper textured virtual characters built from a triangle mesh."}, {"start": 188.96, "end": 195.28, "text": " And, that is just one step away from us being able to appear in the metaphors."}, {"start": 195.28, "end": 202.56, "text": " No head mounted displays are required, no expensive studio, and no motion capture equipment is required."}, {"start": 202.56, "end": 204.88, "text": " So, what is required?"}, {"start": 204.88, "end": 206.88, "text": " Actually, nothing."}, {"start": 206.88, "end": 209.28, "text": " Just the raw input video of us."}, {"start": 209.28, "end": 211.52, "text": " That is insanity."}, {"start": 211.52, "end": 215.20000000000002, "text": " And, all this produces physically correct motion."}, {"start": 215.2, "end": 221.28, "text": " So, that crazy idea about taking people and transforming them into video game characters"}, {"start": 221.28, "end": 223.51999999999998, "text": " is not so crazy after all."}, {"start": 223.51999999999998, "end": 231.67999999999998, "text": " So, now we are one step closer to be able to work and even have some coffee together in a virtual world."}, {"start": 231.67999999999998, "end": 233.51999999999998, "text": " What a time to be alive."}, {"start": 233.51999999999998, "end": 237.04, "text": " This video has been supported by weights and biases."}, {"start": 237.04, "end": 242.39999999999998, "text": " And, being a machine learning researcher means doing tons of experiments and, of course,"}, {"start": 242.39999999999998, "end": 244.79999999999998, "text": " creating tons of data."}, {"start": 244.8, "end": 249.28, "text": " But, I am not looking for data, I am looking for insights."}, {"start": 249.28, "end": 252.56, "text": " And, weights and biases helps with exactly that."}, {"start": 252.56, "end": 257.04, "text": " They have tools for experiment tracking, data set and model versioning,"}, {"start": 257.04, "end": 260.48, "text": " and even hyper parameter optimization."}, {"start": 260.48, "end": 264.96000000000004, "text": " No wonder this is the experiment tracking tool choice of open AI,"}, {"start": 264.96000000000004, "end": 269.52, "text": " Toyota Research, Samsung, and many more prestigious 
labs."}, {"start": 269.52, "end": 275.28, "text": " Make sure to use the link wnb.me slash paper intro,"}, {"start": 275.28, "end": 277.91999999999996, "text": " or just click the link in the video description."}, {"start": 277.91999999999996, "end": 284.4, "text": " And, try this 10 minute example of weights and biases today to experience the wonderful feeling"}, {"start": 284.4, "end": 289.03999999999996, "text": " of training a neural network and being in control of your experiments."}, {"start": 289.76, "end": 292.4, "text": " After you try it, you won't want to go back."}, {"start": 292.4, "end": 302.79999999999995, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=d3njVfnCdN0
From Mesh To Yarn... In Real Time! 🧶
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Mechanics-Aware Deformation of Yarn Pattern Geometry" is available here: https://visualcomputing.ist.ac.at/publications/2021/MADYPG/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Wish to watch these videos in early access? Join us here: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This is not some hot new Two Minute Papers merchandise. This is what a new cloth simulation paper is capable of doing today. So good! And today we are also going to see how to put the holes back into holey geometry, and do it quickly. But wait a second, that is impossible. We know that yarn-based cloth simulations take forever to compute. Here is an example. We showcased this previous work approximately 60 episodes ago. And curiously, get this, some of the simulations here were nearly 60 times faster than the yarn-based reference simulation. That is amazing. However, we noted that even though this technique was a great leap forward, of course it wasn't perfect; there was a price to be paid for this amazing speed, which was, look, the pulling effects on individual yarns were neglected. That amazing holey geometry is lost. I noted that I'd love to get that back. You will see in a moment that maybe my wish comes true today. Hmm, so, quick rundown. We either go for a mesh-based cloth simulation. These are really fast, but no holey geometry. Or we choose the yarn-level simulations. These are the real deal. However, they take forever. Unfortunately, still, there seems to be no way out here. Now, let's have a look at this new technique. It promises to try to marry these two concepts. Or in other words, use a fast mesh simulator, and add the yarn-level cloth details on top of it. Now of course, that is easier said than done. So let's see how this new method can deal with this challenging problem. So, this is the fast mesh simulation to start with. And now, hold onto your papers, and let's add those yarns. Oh my, that is beautiful. I absolutely love how at different points you can even see through these garments. Beautiful. So, how does it do all this magic? Well, look at this naive method. This neglects the proper tension between the yarns, and look at how beautifully they tighten as we add the new method on top of it. This technique can do it properly. And not only that, but it also simulates how the garment interacts with a virtual body in a simulated world. Once again, note how the previous naive method neglects the tightening of the yarns. So I am super excited. Now let's see how long we have to wait for this. Are we talking fast mesh simulation timings, or slow yarn simulation timings? My goodness, look at that. The mesh part runs on your processor, while the yarn part of the simulation is implemented on the graphics card. And whoa, the whole thing runs in the order of milliseconds, easily in real time. Even if we have a super detailed garment with tens of millions of vertices in the simulation, the timings are in the order of tens of milliseconds at worst. That's also super fast. Wow. So, yes, part of this runs on our graphics card, and in real time. Here we have nearly a million vertices, not a problem. Here, nearly two million vertices, two million points, if you will, and it runs like a dream. And it only starts to slow down when we go all the way to a super detailed piece of garment with 42 million vertices. And here there is so much detail that finally not the simulation technique is the bottleneck, but the light simulation algorithm is. Look, there are so many high-frequency details that we would need higher resolution videos or more advanced anti-aliasing techniques to resolve all these details. All this means that the simulation technique did its job really well. So, finally, yarn-level simulations in real time.
What a time to be alive. Huge congratulations to the entire team and the first author, Georg Sperl, who is still a PhD student making important contributions like this. And get this, Georg's short presentation was seen by... Oh my, 61 people. Views are not everything. Not even close. But once again, if we don't talk about this work here, I am worried that almost no one will. This is why Two Minute Papers exists. Subscribe if you wish to see more of these miracle papers; we have some great ones coming up. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space and a whole lot more. Perfect for a Fellow Scholar with an open mind. Make sure to visit them through wandb.me/gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
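The heart of the hybrid described above is an embedding: each yarn point is pinned to a triangle of the coarse cloth mesh through barycentric coordinates computed once at rest, so whenever the fast mesh simulation moves, the detailed yarn geometry is dragged along essentially for free. Here is a bare-bones sketch of that mapping, assuming numpy, with one triangle and a toy yarn loop; the actual paper additionally deforms the yarn geometry according to the local stretch of the cloth, which this sketch omits.

import numpy as np

# One coarse cloth triangle (rest pose) and a small yarn loop inside it.
tri_rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
theta = np.linspace(0.0, 2.0 * np.pi, 50)
yarn_rest = np.stack([0.3 + 0.1 * np.cos(theta),
                      0.3 + 0.1 * np.sin(theta),
                      np.zeros_like(theta)], axis=1)

def barycentric(p, tri):
    # Barycentric coordinates of points p w.r.t. triangle tri
    # (the rest pose is flat, so we can work in the xy plane).
    a, b, c = tri[0, :2], tri[1, :2], tri[2, :2]
    M = np.stack([b - a, c - a], axis=1)         # 2x2 edge matrix
    uv = np.linalg.solve(M, (p[:, :2] - a).T).T  # (N, 2)
    return np.concatenate([1.0 - uv.sum(1, keepdims=True), uv], axis=1)

# Computed once, at rest; reused every frame afterwards.
bary = barycentric(yarn_rest, tri_rest)          # (N, 3) weights

# The mesh simulator moves the triangle (here: a stretch plus a lift).
tri_moved = np.array([[0., 0., 0.2], [1.4, 0., 0.], [0., 1.1, 0.3]])

# Yarn points follow the triangle through the same barycentric weights.
yarn_moved = bary @ tri_moved                    # (N, 3)
print(yarn_moved.shape, yarn_moved[0])

Since the per-frame update is just a small matrix product per yarn point, it maps naturally onto the graphics card, which is exactly where the video says the yarn part of the simulation runs.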
[{"start": 0.0, "end": 5.44, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.44, "end": 9.0, "text": " This is not some hot-nulled Two Minute Papers merchandise."}, {"start": 9.0, "end": 14.200000000000001, "text": " This is what a new class simulation paper is capable of doing today."}, {"start": 14.200000000000001, "end": 15.56, "text": " So good!"}, {"start": 15.56, "end": 21.44, "text": " And today we are also going to see how to put the holes back into Holy Geometry and do"}, {"start": 21.44, "end": 22.44, "text": " it quickly."}, {"start": 22.44, "end": 26.52, "text": " But, wait a second, that is impossible."}, {"start": 26.52, "end": 31.32, "text": " We know that yarn-based class simulations take forever to compute."}, {"start": 31.32, "end": 32.68, "text": " Here is an example."}, {"start": 32.68, "end": 37.6, "text": " We showcased this previous work approximately 60 episodes ago."}, {"start": 37.6, "end": 44.480000000000004, "text": " And curiously, get this, some of the simulations here were nearly 60 times faster than the"}, {"start": 44.480000000000004, "end": 47.32, "text": " yarn-based reference simulation."}, {"start": 47.32, "end": 48.8, "text": " That is amazing."}, {"start": 48.8, "end": 54.879999999999995, "text": " However, we noted that even though this technique was a great leap forward, of course it wasn't"}, {"start": 54.88, "end": 61.56, "text": " perfect, there was a prize to be paid for this amazing speed, which was, look, the pulling"}, {"start": 61.56, "end": 65.68, "text": " effects on individual yarns were neglected."}, {"start": 65.68, "end": 68.76, "text": " That amazing Holy Geometry is lost."}, {"start": 68.76, "end": 71.52000000000001, "text": " I noted that I'd love to get that back."}, {"start": 71.52000000000001, "end": 76.12, "text": " You were seen a moment that maybe my wish comes true today."}, {"start": 76.12, "end": 78.72, "text": " Hmm, so quick rundown."}, {"start": 78.72, "end": 81.96000000000001, "text": " We either go for a mesh-based class simulation."}, {"start": 81.96, "end": 86.08, "text": " These are really fast, but no Holy Geometry."}, {"start": 86.08, "end": 89.44, "text": " Or we choose the yarn-level simulations."}, {"start": 89.44, "end": 90.96, "text": " These are the real deal."}, {"start": 90.96, "end": 93.16, "text": " However, they take forever."}, {"start": 93.16, "end": 97.44, "text": " Unfortunately, still, there seems to be no way out here."}, {"start": 97.44, "end": 100.44, "text": " Now, let's have a look at this new technique."}, {"start": 100.44, "end": 104.39999999999999, "text": " It promises to try to marry these two concepts."}, {"start": 104.39999999999999, "end": 108.88, "text": " Or in other words, use a fast mesh simulator."}, {"start": 108.88, "end": 112.92, "text": " And add the yarn-level cloth details on top of it."}, {"start": 112.92, "end": 115.96, "text": " Now of course, that is easier said than done."}, {"start": 115.96, "end": 120.75999999999999, "text": " So let's see how this new method can deal with this challenging problem."}, {"start": 120.75999999999999, "end": 125.44, "text": " So this is the fast mesh simulation to start with."}, {"start": 125.44, "end": 130.12, "text": " And now hold onto your papers and let's add those yarns."}, {"start": 130.12, "end": 133.56, "text": " Oh my, that is beautiful."}, {"start": 133.56, "end": 139.48, "text": " I absolutely love how at different points you can even see through these garments."}, {"start": 
139.48, "end": 140.48, "text": " Beautiful."}, {"start": 140.48, "end": 143.52, "text": " So, how does it do all this magic?"}, {"start": 143.52, "end": 145.92000000000002, "text": " Well, look at this naive method."}, {"start": 145.92000000000002, "end": 151.32, "text": " This neglects the proper tension between the yarns and look at how beautifully they"}, {"start": 151.32, "end": 155.24, "text": " tighten as we add the new method on top of it."}, {"start": 155.24, "end": 158.04, "text": " This technique can do it properly."}, {"start": 158.04, "end": 163.4, "text": " And not only that, but it also simulates how the garment interacts with a virtual body"}, {"start": 163.4, "end": 167.84, "text": " in a simulated world."}, {"start": 167.84, "end": 174.12, "text": " Once again, note how the previous naive method neglects the tightening of the yarns."}, {"start": 174.12, "end": 176.44, "text": " So I am super excited."}, {"start": 176.44, "end": 179.48000000000002, "text": " Now let's see how long we have to wait for this."}, {"start": 179.48000000000002, "end": 186.52, "text": " Are we talking fast mesh simulation timings or slow yarn simulation timings?"}, {"start": 186.52, "end": 189.16, "text": " My goodness, look at that."}, {"start": 189.16, "end": 195.44, "text": " The mesh part runs on your processor while the yarn part of the simulation is implemented"}, {"start": 195.44, "end": 197.4, "text": " on the graphics card."}, {"start": 197.4, "end": 205.12, "text": " And whoa, the whole thing runs in the order of milliseconds easily in real time."}, {"start": 205.12, "end": 210.96, "text": " Even if we have a super detailed garment with tens of millions of vertices in the simulation,"}, {"start": 210.96, "end": 215.8, "text": " the timings are in the order of tens of milliseconds at worst."}, {"start": 215.8, "end": 218.44, "text": " That's also super fast."}, {"start": 218.44, "end": 219.44, "text": " Wow."}, {"start": 219.44, "end": 225.28, "text": " So, yes, part of this runs on our graphics card and in real time."}, {"start": 225.28, "end": 229.2, "text": " Here we have nearly a million vertices, not a problem."}, {"start": 229.2, "end": 236.04, "text": " Here nearly two million vertices, two million points, if you will, and it runs like a dream."}, {"start": 236.04, "end": 241.56, "text": " And it only starts to slow down when we go all the way to a super detailed piece of garment"}, {"start": 241.56, "end": 244.72, "text": " with 42 million vertices."}, {"start": 244.72, "end": 251.08, "text": " And here there is so much detail that finally not the simulation technique is the bottleneck,"}, {"start": 251.08, "end": 253.64, "text": " but the light simulation algorithm is."}, {"start": 253.64, "end": 259.72, "text": " Look, there are so many high frequency details that we would need higher resolution videos"}, {"start": 259.72, "end": 264.84, "text": " or more advanced anti-aliasing techniques to resolve all these details."}, {"start": 264.84, "end": 269.2, "text": " All this means that the simulation technique did its job really well."}, {"start": 269.2, "end": 273.92, "text": " So, finally, yarn level simulations in real time."}, {"start": 273.92, "end": 275.64000000000004, "text": " Not a time to be alive."}, {"start": 275.64000000000004, "end": 281.12, "text": " Huge congratulations to the entire team and the first author, Georg Spell, who is still"}, {"start": 281.12, "end": 285.72, "text": " a PhD student making important contributions like this."}, {"start": 285.72, "end": 
289.72, "text": " And get this, Georg's short presentation was seen by..."}, {"start": 289.72, "end": 293.36, "text": " Oh my, 61 people."}, {"start": 293.36, "end": 295.24, "text": " Views are not everything."}, {"start": 295.24, "end": 296.72, "text": " Not even close."}, {"start": 296.72, "end": 301.68, "text": " But once again, if we don't talk about this work here, I am worried that almost no one"}, {"start": 301.68, "end": 302.68, "text": " will."}, {"start": 302.68, "end": 305.48, "text": " This is why two minute papers exist."}, {"start": 305.48, "end": 310.16, "text": " Subscribe if you wish to see more of these miracle papers we have some great ones coming"}, {"start": 310.16, "end": 311.16, "text": " up."}, {"start": 311.16, "end": 314.28000000000003, "text": " This video has been supported by weights and biases."}, {"start": 314.28000000000003, "end": 319.52, "text": " They have an amazing podcast by the name Gradient Descent where they interview machine learning"}, {"start": 319.52, "end": 325.8, "text": " experts who discuss how they use learning based algorithms to solve real world problems."}, {"start": 325.8, "end": 332.16, "text": " They've discussed biology, teaching robots, machine learning in outer space and a whole"}, {"start": 332.16, "end": 333.16, "text": " lot more."}, {"start": 333.16, "end": 336.36, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 336.36, "end": 343.76000000000005, "text": " Make sure to visit them through wnb.me slash gd or just click the link in the video description."}, {"start": 343.76000000000005, "end": 348.6, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 348.6, "end": 350.04, "text": " better videos for you."}, {"start": 350.04, "end": 371.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KZhoU_3k0Nk
Finally, This Table Cloth Pull is Now Possible! 🍽
❤️Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Codimensional Incremental Potential Contact (C-IPC)" is available here: https://ipc-sim.github.io/C-IPC/ Erratum: The cover page in the first frame of the video is from a previous paper. It should be pointing to this: https://ipc-sim.github.io/C-IPC/file/paper.pdf ❤️Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today is a glorious day, because we are going to witness the first tablecloth pull simulation I have ever seen. Well, I hoped that it would go a little more gloriously than this. Maybe if we pull a bit quicker? Yes, there we go. And... we're good, loving it. Now, if you are a seasoned Fellow Scholar, you might remember from about 50 videos ago that we covered the predecessor of this paper called Incremental Potential Contact, IPC in short. So, what could it do? It could perform seriously impressive squishing experiments. And it also passed the tendril test where we threw a squishy ball at a glass wall and watched this process from the other side. A beautiful and rare sight indeed. Unless you have a cat and a glass table at home, of course. Outstanding. So, I hear you asking, Károly, are you trying to say that this new paper tops all that? Yes, that is exactly what I'm saying. The tablecloth pulling is one thing, but it can do so much more. You can immediately start holding onto your papers and let's go. This new variant of IPC is capable of simulating super thin materials and all this in a penetration-free manner. Now, why is that so interesting or difficult? Well, that is quite a challenge. Remember this earlier paper with the barbarian ship, tons of penetration artifacts. And that is not even a thin object, not nearly as thin as this stack would be. Let's see what a previous simulation method would do if these are 10 millimeters each. That looks reasonable. Now, let's cut the thickness of the sheets in half. Yes, some bumpy artifacts appear and at one millimeter, my goodness, it's only getting worse. And when we plug in the same thin sheets into the new simulator, all of them look good. And what's more, they can be simulated together with other elastic objects without any issues. And this was a low-stress simulation. If we use the previous technique for a higher-stress simulation, this starts out well until, uh-oh, the thickness of the cloth is seriously decreasing over time. That is not realistic, but if we plug the same scene into the new technique, now that is realistic. So, what is all this good for? Well, if we wish to simulate a ball of noodles, tons of thick objects, let's see if we can hope for an intersection-free simulation. Let's look under the hood and there we go. All of the noodles are separated. But wait a second, I promised you thin objects. These are not thin. Yes, now these are thin. Still, no intersections. That is absolutely incredible. Other practical applications include simulating hair, braids in particular, and granular materials against a thin sheet work too. And if you have been holding onto your paper so far, now squeeze that paper because the authors promise that we can even simulate this in-hand shuffling technique in a virtual world. Well, I will believe it when I see it. Let's see. My goodness, look at that. Love the attention to detail where the authors color-coded the left and right stack so we can better see how they mix and whether they intersect. Spoiler alert, they don't. What a time to be alive. It can also simulate this piece of cloth with a ton of detail, and not only that, with large time steps, which means that we can advance the time after each simulation step in bigger packets, thereby speeding up the execution time of the method. I also love how we get a better view of the geometry changes as the other side of the cloth has a different color. Once again, great attention to detail. Now we are still in the minutes per frame region, and note that this runs on your processor. And therefore, if someone can implement this on the graphics card in a smart way, it could become close to real time in at most a couple of papers down the line. And this is a research paper that the authors give away to all of us free of charge. How cool is that? Thank you so much for creating these miracles and just giving them away for free. What a noble endeavor research is. Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com/papers and start using their system for free today. Our thanks to Perceptilabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Jona Ifehir."}, {"start": 4.64, "end": 12.64, "text": " Today is a glorious day, because we are going to witness the first tableclothpull simulation I have ever seen."}, {"start": 12.64, "end": 17.36, "text": " Well, I hoped that it would go a little more glorious than this."}, {"start": 17.36, "end": 20.240000000000002, "text": " Maybe if we pull a bit quicker?"}, {"start": 20.240000000000002, "end": 22.16, "text": " Yes, there we go."}, {"start": 22.16, "end": 25.28, "text": " And... we're good, loving it."}, {"start": 25.28, "end": 30.64, "text": " Now, if you are a seasoned Fellow scholar, you might remember from about 50 videos ago,"}, {"start": 30.64, "end": 38.08, "text": " that we covered the predecessor of this paper called Incremental Potential Contact, IPC in short."}, {"start": 38.08, "end": 40.08, "text": " So, what could it do?"}, {"start": 40.08, "end": 45.120000000000005, "text": " It could perform seriously impressive squishing experiments."}, {"start": 45.120000000000005, "end": 50.72, "text": " And it also passed the tendril test where we threw a squishy ball at a glass wall"}, {"start": 50.72, "end": 54.56, "text": " and washed this process from the other side."}, {"start": 54.56, "end": 57.68, "text": " A beautiful and rash side indeed."}, {"start": 57.68, "end": 60.96, "text": " Unless you have a cat and a glass table at home, of course."}, {"start": 61.6, "end": 62.800000000000004, "text": " Outstanding."}, {"start": 62.800000000000004, "end": 69.12, "text": " So, I hear you asking, Karoi, are you trying to say that this new paper tops all that?"}, {"start": 69.92, "end": 72.48, "text": " Yes, that is exactly what I'm saying."}, {"start": 72.48, "end": 77.44, "text": " The tableclothpulling is one thing, but it can do so much more."}, {"start": 77.44, "end": 81.76, "text": " You can immediately start holding onto your papers and let's go."}, {"start": 81.76, "end": 90.48, "text": " This new variant of IPC is capable of simulating super thin materials and all this in a penetration-free manner."}, {"start": 90.48, "end": 94.4, "text": " Now, why is that so interesting or difficult?"}, {"start": 94.4, "end": 96.32000000000001, "text": " Well, that is quite a challenge."}, {"start": 96.32000000000001, "end": 101.76, "text": " Remember this earlier paper with the barbarian ship, tons of penetration artifacts."}, {"start": 101.76, "end": 107.76, "text": " And that is not even a thin object, not nearly as thin as this stack would be."}, {"start": 107.76, "end": 113.76, "text": " Let's see what a previous simulation method would do if these are 10 millimeters each."}, {"start": 113.76, "end": 115.36, "text": " That looks reasonable."}, {"start": 115.36, "end": 119.12, "text": " Now, let's cut the thickness of the sheets in half."}, {"start": 119.12, "end": 127.28, "text": " Yes, some bumpy artifacts appear and at one millimeter, my goodness, it's only getting worse."}, {"start": 127.28, "end": 131.68, "text": " And when we plug in the same thin sheets into the new simulator,"}, {"start": 131.68, "end": 133.28, "text": " all of them look good."}, {"start": 133.28, "end": 139.6, "text": " And what's more, they can be simulated together with other elastic objects without any issues."}, {"start": 140.4, "end": 142.96, "text": " And this was a low-stress simulation."}, {"start": 142.96, "end": 148.48, "text": " If we use the previous technique for a higher-stress simulation, this 
starts out well"}, {"start": 148.48, "end": 154.96, "text": " until, uh-oh, the thickness of the cloth is seriously decreasing over time."}, {"start": 155.68, "end": 161.2, "text": " That is not realistic, but if we plug the same scene into the new technique,"}, {"start": 161.2, "end": 163.44, "text": " now that is realistic."}, {"start": 163.44, "end": 166.07999999999998, "text": " So, what is all this good for?"}, {"start": 166.07999999999998, "end": 170.64, "text": " Well, if we wish to simulate a ball of noodles, tons of thick objects,"}, {"start": 170.64, "end": 174.48, "text": " let's see if we can hope for an intersection-free simulation."}, {"start": 175.04, "end": 178.79999999999998, "text": " Let's look under the hood and there we go."}, {"start": 178.79999999999998, "end": 180.88, "text": " All of the noodles are separated."}, {"start": 181.51999999999998, "end": 185.12, "text": " But wait a second, I promised you thin objects."}, {"start": 185.76, "end": 186.88, "text": " These are not thin."}, {"start": 186.88, "end": 191.76, "text": " Yes, now these are thin. Still, no intersections."}, {"start": 191.76, "end": 194.4, "text": " That is absolutely incredible."}, {"start": 194.4, "end": 198.0, "text": " Other practical applications include simulating hair,"}, {"start": 198.0, "end": 203.44, "text": " braids in particular, granular materials against a thin sheet work too."}, {"start": 203.44, "end": 206.64, "text": " And if you have been holding onto your paper so far,"}, {"start": 206.64, "end": 212.16, "text": " now squeeze that paper because the authors promise that we can even simulate"}, {"start": 212.16, "end": 216.0, "text": " this in-hand shuffling technique in a virtual world."}, {"start": 216.0, "end": 219.04, "text": " Well, I will believe it when I see it."}, {"start": 219.04, "end": 219.76, "text": " Let's see."}, {"start": 220.96, "end": 223.2, "text": " My goodness, look at that."}, {"start": 223.84, "end": 229.12, "text": " Love the attention to detail where the authors color-coded the left and right stack"}, {"start": 229.12, "end": 233.28, "text": " so we can better see how they mix and whether they intersect."}, {"start": 233.92000000000002, "end": 235.52, "text": " Spoiler alert, they don't."}, {"start": 236.08, "end": 237.92000000000002, "text": " What a time to be alive."}, {"start": 237.92000000000002, "end": 241.84, "text": " It can also simulate this piece of cloth with a ton of detail,"}, {"start": 241.84, "end": 246.96, "text": " and not only that, with large time steps, which means that we can advance the time"}, {"start": 246.96, "end": 253.92000000000002, "text": " after each simulation step in bigger packets, thereby speeding up the execution time of the method."}, {"start": 253.92000000000002, "end": 258.32, "text": " I also love how we get a better view of the geometry changes"}, {"start": 258.32, "end": 262.24, "text": " as the other side of the cloth has a different color."}, {"start": 262.24, "end": 264.56, "text": " Once again, great attention to detail."}, {"start": 265.12, "end": 267.92, "text": " Now we are still in the Minutes' Perfume region,"}, {"start": 267.92, "end": 271.36, "text": " and note that this runs on your processor."}, {"start": 271.36, "end": 276.56, "text": " And therefore, if someone can implement this on the graphics card in a smart way,"}, {"start": 276.56, "end": 281.6, "text": " it could become close to real time in at most a couple of papers down the line."}, {"start": 282.0, "end": 288.24, "text": " And this is a 
research paper that the authors give away to all of us free of charge."}, {"start": 288.24, "end": 289.52000000000004, "text": " How cool is that?"}, {"start": 290.16, "end": 294.96000000000004, "text": " Thank you so much for creating these miracles and just giving them away for free."}, {"start": 294.96000000000004, "end": 297.6, "text": " What a noble and ever research is."}, {"start": 297.6, "end": 300.8, "text": " Perceptilebs is a visual API for TensorFlow,"}, {"start": 300.8, "end": 305.68, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 305.68, "end": 309.92, "text": " This gives you a faster way to build out models with more transparency"}, {"start": 309.92, "end": 315.52000000000004, "text": " into how your model is architected, how it performs, and how to debug it."}, {"start": 315.52000000000004, "end": 319.92, "text": " And it even generates visualizations for all the model variables,"}, {"start": 319.92, "end": 324.40000000000003, "text": " and gives you recommendations both during modeling and training,"}, {"start": 324.40000000000003, "end": 327.2, "text": " and does all this automatically."}, {"start": 327.2, "end": 333.36, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 333.36, "end": 339.68, "text": " Visit perceptilebs.com, slash papers, and start using their system for free today."}, {"start": 339.68, "end": 345.03999999999996, "text": " Our thanks to perceptilebs for their support, and for helping us make better videos for you."}, {"start": 345.04, "end": 359.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
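To make the penetration-free claim in the transcript above a bit more concrete: IPC-style methods, the lineage this C-IPC paper descends from, add a smoothly clamped log-barrier energy on every candidate contact distance. The sketch below shows only that barrier idea under our own simplifying assumptions; the constants, the helper names, and the flat list of distances are illustrative, and the real method couples this energy with implicit time stepping, a line search, and continuous collision detection.

```python
# Log-barrier sketch behind IPC-style, penetration-free contact.
# Distances d must be positive; the barrier is zero beyond the threshold
# d_hat and grows to infinity as d approaches zero, so any optimizer that
# keeps the total energy finite can never accept an intersecting state.
import math

def barrier(d, d_hat=1e-3):
    """Smoothly clamped log barrier, following the form used in the IPC line of work."""
    if d >= d_hat:
        return 0.0
    return -(d - d_hat) ** 2 * math.log(d / d_hat)

def contact_energy(distances, stiffness=1e4, d_hat=1e-3):
    """Total barrier energy summed over all candidate contact pairs."""
    return stiffness * sum(barrier(d, d_hat) for d in distances)

# A pair at 2 mm contributes nothing, while one at 0.01 mm dominates the sum,
# which is what pushes the solver away from penetration.
print(contact_energy([2e-3, 5e-4, 1e-5]))
```

Because this energy blows up before any distance reaches zero, the solver can take the large time steps mentioned in the transcript and still never land in an intersecting configuration, which matches the intersection-free behavior shown in the video.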